- **$2.4B**: speech therapy tech market (2025)
- **1 in 12**: children with a speech or language disorder
- **AAC**: fastest-growing AI therapy category
- **No**: AI will not replace therapists
There is a specific problem sitting in therapy rooms right now that technology has not solved and human therapists cannot fix alone: demand for speech therapy vastly exceeds the number of trained therapists available. In the US, roughly 1 in 12 children has a speech or language disorder. The average wait time for a pediatric SLP appointment in many cities runs between 3 and 12 months. Something has to fill that gap — and AI-powered tools are increasingly the answer, at least for part of it.
But "AI in speech therapy" means very different things depending on who you are. For a therapist, it means faster session planning and smarter data tracking. For a parent, it means apps that keep a non-verbal child communicating between clinic visits. For a teacher, it means digital tools that make it easier to keep a child with articulation difficulties engaged through a 45-minute session. This article breaks down each category clearly, with the honest limits of what AI can and cannot do in this setting.
Why is technology entering speech therapy now?
The short answer: children who grow up swiping screens before they can form full sentences do not respond the same way to paper flashcards that children in 1995 did. This is not a complaint — it is simply the environment therapists are working in. A child who freezes up in traditional drill activities will sometimes engage immediately when the same task appears on a touchscreen with interactive feedback. The medium changes the experience of the activity.
There is also a practical data problem. Tracking a child's progress across articulation goals, vocabulary building, and sentence construction — across weeks and months of sessions — is genuinely difficult to do with paper. Digital tools let therapists log performance data automatically during sessions, generate progress reports, and spot patterns they might otherwise miss across 20 or 30 client files.
AI specifically accelerates two functions: content generation and pattern recognition. A therapist who previously spent 30 minutes preparing custom picture description cards for a specific child's interest area can now prompt an AI tool and generate that material in two minutes. That time difference matters across a full client load.
What apps are therapists actually using in sessions?
The apps that have found consistent use in real therapy settings tend to fall into a few practical categories, not the ambitious ones that get press coverage. Interactive flashcard apps — like those that allow a child to press an image and hear the word spoken back — replace the physical card stack while adding audio and touch feedback. For a child working on word recognition or basic vocabulary, the difference is less about the technology and more about the engagement loop: press, hear, respond, repeat.
Matching games have moved almost entirely to digital platforms for similar reasons. A child who is bored with sorting physical cards will stay engaged longer with a digital matching task, especially when the app provides visual confirmation — something lighting up, a sound effect, a score updating. These are not sophisticated AI applications. They are basic interaction design serving a real session need.
What is important to understand is how therapists use these tools. They are not replacing physical activity — they are alternating with it. A session might move: physical activity, then a digital game, then a real conversation task, then back to a physical exercise. The screen is one component in a rotation designed to prevent the fatigue that comes from any single type of task. Used this way, the concern about screen addiction does not apply — the device is in the therapist's hands, on for 5-minute segments, as a deliberate teaching tool, not passive entertainment.
What is AAC — and which children need it?
AAC stands for Augmentative and Alternative Communication. The name is more technical than the concept: it is any tool, device, or system that helps someone communicate when their natural speech is not sufficient for their needs. The "augmentative" part means it supplements whatever speech the person has. The "alternative" part means it stands in for speech when there is none.
The children who most clearly benefit from AAC are those who are non-verbal or have very limited verbal vocabulary — this includes many children with autism spectrum disorder, cerebral palsy, childhood apraxia of speech, and certain hearing impairments. The core goal is reducing communication frustration. A non-verbal child who cannot ask for water, cannot say they are in pain, cannot express a preference — that child is operating under a constant communication barrier that affects behavior, learning, and emotional regulation. AAC removes some of that barrier.
In practice, AAC ranges from very simple to highly sophisticated. At the low-tech end: a paper grid of symbols a child can point to. At the high-tech end: tablet-based apps with hundreds of vocabulary items, text-to-speech output, and predictive symbol selection. The most advanced systems for children with severe motor impairments — such as quadriplegia from cerebral palsy — use eye-gaze technology. The child moves their eyes across a screen; the system tracks where they are looking and navigates the interface on their behalf. The AAC device then speaks for them.
Best AAC Apps Compared: Which One Is Right for Your Child?
Not all AAC apps are built the same. The right choice depends on the child's age, motor ability, verbal level, and the specific condition being supported. Here is a straightforward comparison of the most widely used options in 2025–2026, based on what speech therapists are actually recommending:
| App | Best For | AI Feature | Platform | Price |
|---|---|---|---|---|
| Proloquo2Go | Non-verbal autism, ALS, cerebral palsy | Predictive symbol selection, learns user patterns | iOS only | $249.99 one-time |
| TouchChat HD | Children 3+ with limited verbal, apraxia | Word prediction, customizable vocabulary boards | iOS, Android | $149.99 one-time |
| Snap Core First | School-age children, switch/eye-gaze access | Eye-gaze compatibility, activity-based vocabulary | iOS, Windows | Subscription ~$35/month |
| Cough Drop | Budget-conscious families, beginners to AAC | Basic symbol boards, cloud sync across devices | iOS, Android, Web | Free (basic) / $15/month Pro |
| Tobii Dynavox | Severe motor impairment, quadriplegia | Advanced eye-gaze tracking, AI-powered navigation | Dedicated device | $4,000–$15,000 (insurance covered in many countries) |
| LetMeTalk | Autism, low-cost entry point | ARASAAC symbol library, offline use | Android only | Free |
This is the application where the gap between "what AI can do" and "what a human therapist can do" most clearly does not matter — because the question is simply whether the child has a way to communicate. An AAC device answering that question is not competing with a therapist. It is completing a task that nothing else could do.
What role does AI specifically play?
It helps to separate AI's role in speech therapy into three distinct functions, because they operate at different levels of the clinical workflow and have different implications.
Progress tracking and pattern recognition. AI-powered therapy platforms can monitor a child's performance across sessions — tracking accuracy rates on specific phonemes, recording response latency, flagging goals where progress has plateaued. A therapist reviewing this data can identify patterns that would take much longer to notice manually. This is not replacing clinical judgment — it is giving the therapist better information to apply that judgment to.
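To make the plateau-flagging idea concrete, here is a minimal sketch of the kind of check such a platform might run. It is an illustration, not the algorithm any specific product uses; the function name, the window size, and the gain threshold are all assumptions chosen for clarity.

```python
from statistics import mean

def flag_plateaus(accuracy_by_phoneme, window=4, min_gain=0.05):
    """Flag phoneme goals whose recent session-over-session gains have stalled.

    accuracy_by_phoneme: dict mapping a phoneme label (e.g. "/r/") to a list
    of per-session accuracy rates (0.0-1.0), oldest session first.
    A goal is flagged when the mean of the last `window` sessions improves by
    less than `min_gain` over the mean of the `window` sessions before that.
    """
    flagged = []
    for phoneme, scores in accuracy_by_phoneme.items():
        if len(scores) < 2 * window:
            continue  # not enough sessions yet to judge a trend
        recent = mean(scores[-window:])
        earlier = mean(scores[-2 * window:-window])
        if recent - earlier < min_gain:
            flagged.append(phoneme)
    return flagged

history = {
    "/r/": [0.40, 0.42, 0.45, 0.44, 0.45, 0.46, 0.45, 0.46],  # stalled
    "/s/": [0.30, 0.35, 0.40, 0.45, 0.52, 0.58, 0.63, 0.70],  # improving
}
print(flag_plateaus(history))  # → ['/r/']
```

The point of automating a check like this is not the arithmetic — a therapist could do it by hand — but that it runs across every goal in every client file without anyone having to remember to look.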
Therapy material generation. This is currently where the practical value is highest. Using general AI tools — ChatGPT, Google Gemini — therapists can create customized worksheets, picture scenes, story prompts, and vocabulary lists in minutes rather than hours. The personalization matters: a child working on the /r/ sound who loves dinosaurs gets /r/-heavy sentences about dinosaurs. That specificity improves engagement and therefore improves practice frequency.
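A therapist who generates materials this way is mostly writing a careful prompt. As a sketch of what that looks like in practice — every parameter name here is illustrative, not from any real SLP tool — a reusable prompt template might be:

```python
def build_material_prompt(sound, interest, n_sentences=10, level="age 6"):
    """Assemble a prompt a therapist could paste into a general AI chat tool.

    sound: the target phoneme, e.g. "/r/".
    interest: the child's interest area used to personalize the sentences.
    """
    return (
        f"Write {n_sentences} short practice sentences for a child ({level}) "
        f"working on the {sound} sound. Each sentence should contain at least "
        f"three words featuring {sound}, every sentence should be about "
        f"{interest}, and vocabulary should stay simple (sentences under "
        f"8 words)."
    )

print(build_material_prompt("/r/", "dinosaurs"))
```

The value of templating the prompt is consistency: the therapist changes two words per child instead of rewriting the request from scratch each time.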
AAC interface intelligence. Modern high-tech AAC systems are increasingly incorporating predictive selection — the system learns which symbols a specific child uses most frequently and prioritizes those in the interface layout. This reduces the physical or visual scanning time required to find a commonly needed word. Over months of use, the system adapts to the individual user rather than presenting a static vocabulary grid. This is a practical and measurable benefit for children who rely on AAC as their primary communication method.
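The core of predictive symbol selection is simple frequency-based reordering. The following is a minimal sketch of that idea — real AAC apps layer context and grammar models on top, and the function and variable names here are assumptions for illustration:

```python
from collections import Counter

def prioritized_layout(symbols, usage_log, grid_size=12):
    """Order an AAC symbol grid so frequently used symbols come first.

    symbols: the full vocabulary, in its default order.
    usage_log: every symbol the child has selected, in order of use.
    Returns the first `grid_size` symbols for the home screen: most-used
    first, ties broken by the original vocabulary order.
    """
    counts = Counter(usage_log)
    ranked = sorted(symbols, key=lambda s: (-counts[s], symbols.index(s)))
    return ranked[:grid_size]

vocab = ["yes", "no", "water", "help", "play", "more", "stop", "toilet"]
log = ["water", "more", "water", "help", "more", "water"]
print(prioritized_layout(vocab, log, grid_size=4))
# → ['water', 'more', 'help', 'yes']
```

Even this naive version captures the benefit described above: the words a specific child actually uses migrate toward the front of the grid, so scanning time drops with use.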
Will AI replace speech therapists?
The answer is no — but it is worth understanding why, because the reasoning matters for how you think about AI replacing professionals more broadly. This is not a case of "AI can't do it yet but will eventually." It is a case of AI being structurally the wrong tool for a core part of the work.
Speech therapy for children is not primarily a data processing task. It is a relational task. A child who is anxious about attempting a difficult sound — who has failed repeatedly in front of family members and felt the embarrassment of that — needs someone who can read that anxiety in the moment, adjust the session accordingly, keep the stakes low enough that the child feels safe attempting something they associate with failure. That requires a human being who can feel the emotional temperature of a room and respond to it in real time. AI cannot do this, not because of a capability gap that will close with more training data, but because it is not the kind of thing that emerges from processing more data.
There is also the moment that defines successful therapy: a child produces a sound correctly for the first time, or strings together a sentence they have never managed before, or uses their AAC device to ask for something independently. The therapist who has worked with that child for months — who knows what it cost them to get there — can respond to that moment in a way that matters to the child. That response is part of the therapy. It shapes the child's understanding of what they are capable of and their willingness to keep trying. AI cannot provide it.
What AI is doing — and this is useful — is handling the parts of the work that do not require that human presence: generating materials, tracking data, managing scheduling, producing reports. This is the same dynamic playing out across every professional field where AI is making inroads — it takes on the execution tasks that previously consumed time, freeing up the human professional for the work only they can do.
My Take
The thing most coverage of AI in healthcare misses is the difference between the tasks that are being automated and the tasks that actually define the profession. Speech therapy is a clear example. The parts AI is helping with — generating practice materials, tracking session data, making AAC interfaces smarter — were never the core of what a good SLP does. They were the administrative and preparation layer around it. When those tasks get easier, a therapist with a heavy caseload can spend more of their limited time on the actual work. That is a straightforward productivity gain, and it is real.
What I find more interesting is the AAC angle, because that is not really about therapist productivity. It is about access. A non-verbal child in a rural area whose family cannot afford regular SLP sessions, using a well-designed AI-enhanced AAC app, has access to a communication tool that simply did not exist 15 years ago. The question of whether that replaces a therapist is the wrong question — the therapist was never there to be replaced. The relevant comparison is between a child who can communicate and a child who cannot. This is the same pattern visible in AI's most impactful applications broadly — the biggest gains come not from replacing existing services but from reaching people who had no access to those services at all.
The honest caveat is around the AI speech analysis tools that are being marketed to parents directly. The pitch — "our app listens to your child and tells you if they have an articulation problem" — sounds useful but is currently ahead of the actual capability. Children's speech is acoustically variable in ways that challenge even trained human listeners. An app that flags a typical 3-year-old's /r/ production as disordered is not helping anyone. This category needs clearer expectations and more conservative claims before it deserves mainstream confidence.
The broader takeaway: the fields where AI will genuinely help in the near term are not the ones where AI replaces the skilled professional — they are the ones where the skilled professional has been spending too much time on tasks that do not require their specific skills. Speech therapy is a good example of that. The same logic applies to AI agents taking over repetitive professional workflows — the value is in what the human can now do with the time that gets freed up.
Key Takeaways
- AI in speech therapy is most useful for material generation and session data tracking — not clinical assessment.
- AAC devices use AI to serve non-verbal children who have no alternative communication method — this is the highest-impact application.
- Apps in sessions work best as one component in a rotation, not as a replacement for physical and conversational activity.
- AI cannot replace the relational and emotional core of speech therapy — adjusting to a child's anxiety, celebrating their first correct production, reading a room.
- General-purpose AI tools (ChatGPT, Gemini) are already useful for therapists creating personalized practice materials.
- AI-powered speech analysis tools for consumers are not yet reliable enough for clinical use — treat marketing claims with skepticism.
Frequently Asked Questions
Can AI apps replace in-person speech therapy sessions entirely?
No. AI apps can supplement practice between sessions and help non-verbal children communicate, but the assessment, goal-setting, and relational aspects of therapy require a trained human therapist. Think of apps as extending the work done in sessions, not replacing it.
What is the best AAC app for a non-verbal child with autism?
Proloquo2Go and TouchChat are among the most widely used by therapists. However, AAC selection should involve an SLP who knows the specific child — vocabulary needs, motor ability, and cognitive level all affect which system will work. A generic recommendation can be a poor fit.
Can parents use AI tools to practice speech therapy at home?
Yes, within limits. Using an AI tool to generate practice sentences targeting a specific sound your child's SLP is working on is a practical way to extend practice. What parents should avoid is using AI tools to self-diagnose, set goals, or decide whether a child needs therapy — those decisions need a qualified therapist.
Is screen time in therapy sessions harmful for children?
The concern about screen addiction applies to passive, unsupervised use — not to structured therapeutic use. A therapist using an iPad for a 5-minute interactive task within a session is not the same as a child watching YouTube unsupervised. The device is a tool in the therapist's hands, not autonomous entertainment.
How does eye-gaze AAC technology work?
An eye-gaze system uses a camera and infrared sensors to track where the user is looking on a screen. The software maps gaze position to vocabulary items or navigation buttons. After a set dwell time — typically 0.5 to 1.5 seconds of sustained gaze — the system treats that as a selection. This allows completely non-ambulatory individuals to navigate complex AAC interfaces using only eye movement.
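The dwell-time logic described above can be sketched in a few lines. This is a simplified illustration under stated assumptions — a fixed sampling rate and a single dwell threshold — not the algorithm of any particular eye-gaze product:

```python
def dwell_select(gaze_samples, sample_interval_ms=50, dwell_ms=1000):
    """Return the first target selected by sustained gaze, or None.

    gaze_samples: the target id the gaze point falls on at each fixed
    sampling interval (None when the gaze is off every target).
    A selection fires once the same target has been looked at continuously
    for `dwell_ms` milliseconds.
    """
    needed = dwell_ms // sample_interval_ms  # consecutive samples required
    current, streak = None, 0
    for target in gaze_samples:
        if target is not None and target == current:
            streak += 1
        else:
            current = target
            streak = 1 if target is not None else 0
        if current is not None and streak >= needed:
            return current
    return None

# 20 consecutive samples at 50 ms = 1 second of sustained gaze on "water"
samples = ["water"] * 20 + ["help"] * 5
print(dwell_select(samples))  # → water
```

Note the design tension the dwell threshold encodes: too short and the system "selects" everything the user merely glances at; too long and every selection becomes physically tiring. Real systems tune this per user.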
Which AI tools do speech therapists actually use for session planning?
Most therapists using AI for planning are using general-purpose tools — primarily ChatGPT and Google Gemini — to generate custom therapy materials. Specialized practice-management platforms for SLPs, such as Therapy Brands, are adding AI features for documentation and scheduling. The honest picture is that the most-used AI tools in the profession are not speech-specific at all.
Final Thought
This article has not covered the full scope of what AI speech tools will look like in five years. That is the honest gap. The acoustic analysis space is moving quickly, and the current limitations — real as they are — will not all persist. What is less likely to change is the relational core of the work: the therapist who notices a child shutting down before they say anything, who makes the session feel safe enough to try again. That is not a capability waiting to emerge from a larger training run. It is a different kind of intelligence entirely. The tools will keep improving. That part of the work stays human.
Sources: American Speech-Language-Hearing Association (ASHA) · International Society for AAC (ISAAC) · NIDCD Speech/Language Statistics