Your Complete Guide to SLP Evaluation & Treatment Planning

From referral to discharge — learn how to conduct evaluations, write reports, set SMART goals, and document treatment plans.

By Benjamin Thompson, M.S., CCC‑SLP · Reviewed by the SLP Editorial Team · Updated May 11, 2026 · 31 min read

At a Glance

  • A comprehensive speech-language evaluation includes case history, standardized testing, informal measures, and real-life functional analysis.
  • SMART goals with measurable criteria drive every session note, re-evaluation decision, and insurance authorization after diagnosis.
  • Eligibility criteria for speech-language impairment vary significantly by state, making cross-state comparison essential for school-based SLPs.
  • Correct CPT coding and payer-specific documentation standards determine whether SLP evaluations are authorized, reimbursed, or denied.

The quality of a speech-language evaluation determines everything that follows. A thorough assessment produces specific, defensible goals and a clear therapy trajectory. A weak one produces vague objectives, slow progress, and documentation that falls apart under insurance review or IEP scrutiny.

That tension between "good enough" and truly comprehensive is one every clinician faces, whether working under IDEA's 60-day evaluation timeline in a school or racing a payer's authorization window in an outpatient clinic. The stakes are the same across settings: evaluation precision directly shapes treatment outcomes, reimbursement success, and client satisfaction.

Clinicians who build strong habits around report writing, measurable goal development, progress monitoring, and CPT-coded documentation consistently see better client outcomes and fewer denied claims. This guide walks through each stage of the process, from selecting the right SLP assessment tools to writing goals grounded in evidence-based speech therapy techniques, so you can approach your evaluations with confidence from day one.

What Is Included in a Speech-Language Evaluation?

A speech-language evaluation is far more than a single test score. It is a comprehensive process designed to capture how a person communicates across real-life contexts, identify the nature and severity of any disorder, and lay the groundwork for an effective treatment plan. ASHA's Preferred Practice Patterns for the Profession of Speech-Language Pathology, originally approved in 2004 and still current, align the evaluation process with the World Health Organization's International Classification of Functioning, Disability and Health framework.1 That means clinicians are expected to consider not just impairment-level data but also how communication difficulties affect participation in daily activities.

Core Components of the Evaluation

While every evaluation is tailored to the individual, most follow a consistent structure that includes several key components:

  • Case history review: The clinician gathers background information from medical records, educational reports, caregiver interviews, and patient self-reports. This step helps identify risk factors, developmental milestones, prior diagnoses, and the concerns that prompted the referral.
  • Oral-motor examination: A physical assessment of the structures and function of the lips, tongue, jaw, palate, and teeth helps rule out or confirm motor-based speech disorders.
  • Standardized (formal) testing: Norm-referenced instruments provide scores that compare an individual's performance against age-matched peers, supporting diagnostic decisions and eligibility determinations.
  • Informal measures: Language samples, dynamic assessment, narrative retell tasks, and observation-based checklists reveal how a person actually uses communication in context.
  • Hearing screening: Because hearing loss can mimic or compound speech-language difficulties, a screening is typically included or recent audiological results are reviewed.
  • Clinical observations: Noting eye contact, turn-taking, attention, frustration tolerance, and self-correction patterns adds depth that no single test can capture.

Why Both Formal and Informal Assessment Matter

Standardized test scores are valuable, but they have well-documented limitations. A child might score within the average range on a vocabulary test yet struggle to hold a conversation with peers. An adult recovering from a stroke might perform poorly on a timed naming task yet communicate effectively using compensatory strategies. ASHA's Scope of Practice in Speech-Language Pathology (2016) explicitly recommends the use of multiple data sources2, and ASHA's Medical Review Guidelines reinforce that clinicians should avoid basing a diagnosis on a single standardized score.3 Informal measures fill the gaps by showing how communication functions in everyday situations, which is essential for writing goals that actually improve quality of life. For a deeper look at the specific instruments clinicians choose across disorder areas, see our guide to speech language pathology assessment tools.

How Setting and Population Shape the Evaluation

No two evaluations look exactly alike, because the clinical questions differ across settings and populations.

In school-based practice, an evaluation often centers on whether a student's communication difficulties adversely affect educational performance. Clinicians may emphasize curriculum-based language sampling and classroom observations alongside standardized testing.

In outpatient clinics, the evaluation might focus on a broader range of functional outcomes. A preschooler referred for articulation concerns will undergo a very different battery than an adult being assessed for aphasia following a neurological event.

In hospital or acute-care settings, evaluations tend to be more targeted and time-sensitive. A bedside swallowing screen or a brief cognitive-linguistic assessment may take priority, with a more comprehensive evaluation scheduled once the patient stabilizes.

Fluency evaluations, voice evaluations, and augmentative and alternative communication assessments each bring their own specialized tools and observation protocols. Recognizing which components to include, and which to prioritize, is one of the core clinical reasoning skills you will develop throughout your graduate training. Grounding those decisions in evidence-based practice in speech-language pathology will strengthen your clinical rationale from the start.

The Bottom Line for Students

If you are preparing for clinical practicum or studying for your certification exams, remember that a thorough evaluation draws on converging evidence from multiple sources. ASHA's guidance is clear: no single data point tells the whole story.1 Building the habit of triangulating formal scores, informal observations, and client or caregiver input will make your evaluations stronger and your treatment plans more effective from day one.

The Referral-to-Discharge Workflow: Step by Step

Every speech-language pathology case follows a predictable lifecycle, from the initial referral through discharge. While this workflow applies across clinical settings, timelines vary. School-based SLPs typically follow IDEA's 60-day evaluation timeline, whereas medical settings operate within payer authorization windows that may be shorter or require periodic renewal.

Figure: The nine-stage SLP clinical workflow, from referral through screening, evaluation, eligibility determination, treatment planning, intervention, progress monitoring, re-evaluation, and discharge.

Questions to Ask Yourself

  • Are you relying on standardized scores alone? Standardized scores alone can miss how a client actually communicates day to day. Adding a language sample, dynamic assessment, or structured observation gives you a fuller picture of strengths and needs, especially for clients from culturally and linguistically diverse backgrounds.
  • When did you last revise your case history form? If your form hasn't been updated in the past two years, it may lack questions about home language use, interpreting needs, or dialect considerations. Outdated forms risk incomplete histories that lead to misidentification or missed diagnoses.
  • Are you documenting functional impact, not just scores? Scores tell referral sources and families how a client performed relative to peers, but they don't explain how the impairment affects classroom participation, social relationships, or daily routines. Documenting functional impact strengthens eligibility decisions and justifies services to insurance payers.
  • Are your findings grounded in the client's daily life? Evaluation data that stays disconnected from a client's daily context makes it harder to write meaningful goals and harder for families to understand why therapy matters. Grounding findings in specific situations, like ordering at a restaurant or following multi-step classroom directions, bridges that gap.

How to Write an SLP Evaluation Report

A well-written evaluation report does more than document test scores. It tells the story of a client's communication profile, connects formal and informal findings to real-life function, and provides the clinical rationale that drives eligibility decisions, insurance authorizations, and treatment planning. Whether you are a graduate student drafting your first report or a working clinician refining your documentation skills, a consistent structure and reader-centered writing style will make your reports more effective.

Section-by-Section Report Template

Most SLP evaluation reports follow a predictable framework. Organizing yours around these sections keeps information easy to locate for every reader, from parents to insurance reviewers.

  • Identifying information: Client name, date of birth, date of evaluation, referral source, and the clinician's credentials.
  • Reason for referral: A concise statement explaining why the evaluation was requested and by whom.
  • Background and history: Relevant developmental, medical, educational, and social history gathered from caregiver interviews, medical records, and prior reports.
  • Assessment procedures: A list of all formal tests, informal measures, observations, and language samples administered during the evaluation.
  • Results by domain: Findings organized by communication area (e.g., articulation, receptive language, expressive language, fluency, voice, pragmatics, feeding and swallowing) with both quantitative data and qualitative descriptions.
  • Clinical impressions: A synthesis of all results into a cohesive profile, including severity ratings and functional impact statements.
  • Recommendations: Specific therapy recommendations, referrals, frequency and duration of services, and any accommodations or home strategies.

This structure works across settings, whether you practice in schools, hospitals, private clinics, or early intervention programs.

Write for the Reader, Not the Clinician

Parents, teachers, and case managers are among the most frequent readers of your reports, and none of them need a paragraph copied from a test manual explaining what the Goldman-Fristoe measures. Instead, translate scores into functional language. Rather than writing "Client scored a standard score of 72 on the CELF-5 Receptive Language Index," consider something like: "Her receptive language skills fall well below expectations for her age. In practical terms, she is likely to have difficulty following multi-step classroom directions and understanding grade-level reading passages."

Every score you report should be paired with a plain-language explanation of what it means for the client's daily communication. This approach respects the reader's time and builds trust with families who may feel overwhelmed by clinical terminology.

Documenting Severity, Functional Impact, and Recommendations

Severity ratings and functional impact statements are critical for two audiences: school teams determining eligibility and insurance reviewers authorizing services. Use consistent descriptors (mild, moderate, severe, profound) and tie them directly to observable limitations. For example, stating that a client's severe expressive language disorder results in frequent communication breakdowns during peer interactions and limits participation in classroom discussions gives decision-makers the context they need.

Your recommendations section should be specific enough to justify the services you are requesting. Instead of writing "speech therapy is recommended," specify the frequency (e.g., two 30-minute sessions per week), the targeted domains, and the service delivery model. This level of detail supports IEP eligibility arguments and satisfies the medical necessity criteria that most insurance payers require. Understanding the full SLP scope of practice can also help you articulate which domains fall within your professional authority when crafting recommendations.

Avoiding Common Report-Writing Pitfalls

Even experienced clinicians fall into patterns that weaken their reports. Watch for these common missteps:

  • Omitting informal data such as language sample analysis, play-based observations, or caregiver-reported concerns. Standardized scores alone rarely capture the full picture, and informal data often provides the strongest evidence of functional impact.
  • Copy-pasting test descriptions without interpretation. Listing subtest scores in a table is not the same as analyzing what those scores mean together. Readers need you to connect the dots.
  • Failing to link results to functional communication needs. A report that ends with scores but never explains how those scores affect the client's ability to participate in school, work, or social life misses the point of the evaluation entirely.
  • Using vague or boilerplate recommendation language that could apply to any client. Individualized, specific recommendations demonstrate clinical reasoning and strengthen your case for services.

Think of the evaluation report as the foundation for everything that follows: treatment goals, progress monitoring, re-evaluation decisions, and discharge planning. Selecting the right SLP assessment tools at the outset ensures you have the data to build a thorough, reader-friendly document that saves time down the road and leads to better outcomes for the clients you serve.

Common Assessment Tools and When to Use Them

Selecting the right assessment tool is one of the most consequential decisions you will make during an evaluation. The tool needs to match the suspected disorder, the client's age, and the linguistic and cultural context of the evaluation. Below is a domain-by-domain guide to widely used instruments, along with guidance on multilingual considerations and digital platforms for telepractice. For a deeper dive into instrument selection by disorder area, explore our list of speech and language assessments.

Articulation and Phonology

  • Goldman-Fristoe Test of Articulation, Third Edition (GFTA-3): Normed for ages 2 through 21, this is often the first choice when you suspect an articulation disorder because it offers a straightforward single-word format and strong normative data across a wide age span.1
  • Khan-Lewis Phonological Analysis, Third Edition (KLPA-3): Designed to be used alongside the GFTA-3, this tool identifies phonological process patterns. Choose it when errors appear systematic rather than limited to individual sounds.

Receptive and Expressive Language

  • Clinical Evaluation of Language Fundamentals, Fifth Edition (CELF-5): Ages 5 through 21. This is probably the most commonly administered language battery in school and clinical settings. It covers semantics, morphology, syntax, and pragmatics through a modular structure that lets you tailor subtests to the referral concern.1
  • Preschool Language Scales, Fifth Edition (PLS-5): Birth through age 7, making it the go-to option for early intervention and preschool populations. It separates auditory comprehension from expressive communication.
  • Oral and Written Language Scales, Second Edition (OWLS-II): Ages 3 through 21. Select this when written language is part of the referral question, since it captures listening comprehension, oral expression, and reading and writing in a single battery.

Fluency, Voice, and Pragmatics

  • Stuttering Severity Instrument, Fourth Edition (SSI-4): Ages 2 through adult. It quantifies frequency, duration, and physical concomitants of stuttering, providing a severity rating that is useful for eligibility decisions and baseline measurement.
  • Voice assessment typically relies on a combination of perceptual scales (such as the CAPE-V) and instrumental measures rather than a single standardized test. Choose perceptual rating when instrumentation is unavailable.
  • Comprehensive Assessment of Spoken Language, Second Edition (CASL-2): Ages 3 through 21. Its pragmatics subtests are particularly helpful when social communication is the primary concern, offering normed data on pragmatic judgment and inferencing that informal checklists cannot provide.

Adult Neurogenic Communication Disorders

  • Western Aphasia Battery, Revised (WAB-R): Adults. This is the standard for classifying aphasia type and severity. It generates an Aphasia Quotient that is widely recognized across rehabilitation settings.
  • Boston Diagnostic Aphasia Examination, Third Edition (BDAE-3): Adults. Choose the BDAE-3 when you need a more detailed profile of language breakdown, especially for research purposes or complex differential diagnoses.

Multilingual and Culturally Responsive Assessment

Standardized norms developed on monolingual English speakers can misidentify a language difference as a disorder. Several tools address this gap directly, and clinicians interested in expanding their competencies in this area may want to learn how to become a bilingual speech pathologist.

  • Bilingual English-Spanish Assessment (BESA): Normed for bilingual children ages 4 through 9. It evaluates morphosyntax, semantics, and phonology in both English and Spanish, making it far more accurate than translating an English-only test.1
  • i-Talk(i): Designed for children ages 3 through 8 with suspected speech sound disorders, this tool accounts for cross-linguistic phonological patterns.3
  • CELF Spanish: Ages 5 through 21, providing a Spanish-language evaluation of core language skills.1
  • WAB-R Spanish: An adapted version of the WAB-R for Spanish-speaking adults with aphasia.1

When no normed tool exists in a client's language, dynamic assessment (a test-teach-retest model) and interpreter-mediated evaluation are the recommended alternatives. Dynamic assessment measures a client's ability to learn rather than what they already know, which helps separate limited English exposure from a true language disorder.

Digital Platforms for Telepractice

SLP telepractice has made digital assessment platforms essential rather than optional. Several options support remote administration.

  • Q-global: A web-based scoring and reporting platform from Pearson that covers assessments from birth through adulthood across speech-language domains.1
  • Q-interactive: Pearson's iPad-based administration platform, allowing clinicians to present digital stimulus books and capture responses in real time.1
  • SALT (Systematic Analysis of Language Transcripts): A digital tool for analyzing language samples across syntax, semantics, and pragmatics. It is especially valuable when standardized tests do not capture functional communication.2
  • SLP Now: A digital platform spanning preschool through high school that supports articulation and language assessment and therapy planning.3
  • Net Health SLP Toolkit: Covers cognition and language across all ages and is geared toward clinical and rehabilitation settings.4

The best evaluations rarely depend on a single instrument. Pairing standardized scores with language samples, dynamic assessment, and clinical observation gives you the fullest picture of a client's communication profile.

Developing Measurable Treatment Goals (SMART Goals)

The evaluation report lays the groundwork, but it is the treatment goals that drive every session, every data point, and every conversation with families, teachers, and insurance reviewers. Vague objectives such as "improve articulation" or "increase language skills" give clinicians nothing concrete to measure and give payers no reason to authorize continued services. The SMART framework solves that problem by building accountability into every goal from day one.

The SMART Framework Applied to SLP

Each goal you write should pass through five filters before it lands in a treatment plan or IEP.

  • Specific: Identify the exact target. Name the sound, grammatical structure, fluency strategy, or functional communication skill. "Produce /r/ in the final position of words" is specific; "work on speech sounds" is not.
  • Measurable: Attach a metric. Percentage accuracy across a set number of trials, frequency counts per conversational sample, or a clinician rating scale all work, as long as someone other than the treating SLP could collect the same data and arrive at a comparable result.
  • Achievable: Anchor the target to the client's current baseline. If a child produced /r/ correctly in 20% of word-final contexts during the evaluation, aiming for 80% in structured tasks within a marking period is ambitious but realistic. Jumping straight to 90% accuracy in conversation is not.
  • Relevant: Connect the goal to a functional communication need. A fluency target matters more when it is tied to the client's ability to participate in classroom discussions or order independently at a restaurant.
  • Time-bound: State a clear review period, whether that is a grading quarter, a 90-day insurance authorization cycle, or a scheduled re-evaluation date.

Sample SMART Goals Across Disorder Types

Concrete examples help illustrate how the framework adapts to different clinical populations.

  • Articulation: The student will produce /r/ in all word positions during a five-minute conversational sample with 80% accuracy, as measured by clinician transcription, within one semester.
  • Expressive language: The student will produce complex sentences containing at least one subordinate clause during narrative retell tasks with 75% accuracy across three consecutive data sessions, within 12 weeks.
  • Fluency: The client will independently use a stuttering modification strategy (pull-out or cancellation) in at least 60% of disfluent moments during structured conversation, as documented by clinician tally, within one grading quarter.
  • Adult aphasia: The client will verbally produce functional two-word phrases (e.g., requesting items, greeting communication partners) with 70% accuracy across 20 trials in a clinical session, within 90 days of initiating treatment.

Long-Term Goals, Short-Term Objectives, and Discharge Benchmarks

Think of the long-term goal as the destination and each short-term objective as a milestone along the route. A long-term articulation goal might target accurate /r/ production in spontaneous conversation. Short-term objectives scaffold that journey: first in isolation, then in syllables, then in structured words, then in sentences, and finally in connected speech. Each objective should specify its own accuracy criterion and timeline so that progress, or the lack of it, is visible at every stage.

Discharge criteria should be established at the outset. When the client meets the long-term goal and maintains it across settings and communication partners, services are no longer warranted. Documenting that endpoint early prevents therapy from drifting without a clear conclusion.

Writing IEP Goals That Serve the Classroom

School-based clinicians face an extra layer of complexity. IEP goals must be educationally relevant, not simply clinically accurate. A perfectly written SMART goal for /s/ production still falls short if it does not explain how that target connects to the student's ability to participate in class, follow curriculum-based instructions, or interact with peers.

Practical tip: tie every IEP goal to a classroom activity or academic demand. Instead of writing "will produce /s/ clusters with 80% accuracy," try "will produce /s/ clusters with 80% accuracy during oral reading and class discussion, as measured by teacher and clinician observation across three data collection sessions." This language reinforces that speech therapy in a school setting exists to support access to education, and it gives the IEP team a shared, observable benchmark.

Goal writing improves when it is grounded in evidence-based speech therapy techniques and informed by the data collected during the initial evaluation. Selecting the right standardized language assessments ensures your baselines are defensible and your targets are clinically justified. Well-crafted goals are not just a documentation requirement. They are the clinical compass that keeps treatment focused, measurable, and defensible at every stage from the first session through discharge.

An evaluation without measurable goals is simply a description of the problem. Goals are the bridge between diagnosis and progress. Every session note, re-evaluation, and discharge decision traces back to whether those goals were specific enough to measure. If goals lack clarity, the entire treatment process loses direction, and documenting meaningful outcomes becomes nearly impossible.

Eligibility Criteria: How States Differ on Speech-Language Impairment

One of the most challenging aspects of school-based SLP practice is navigating eligibility criteria that vary significantly from state to state. While the Individuals with Disabilities Education Act (IDEA) establishes the federal requirement that a speech-language impairment must have an adverse educational impact, each state interprets that standard through its own regulatory lens.1 Understanding these differences is essential, especially if you plan to practice across state lines or serve diverse populations.

The Spectrum of Eligibility Models

State approaches generally fall into three categories: quantitative (relying on standardized score thresholds), clinical judgment (relying on professional expertise and functional evidence), and hybrid models that blend both. Where a state falls on this spectrum directly affects which students qualify for services.

  • Quantitative models: California uses strict thresholds, requiring scores at or below 1.5 standard deviations below the mean on two or more standardized language tests. Clinical judgment is permitted for speech sound disorders but not as a standalone pathway for language eligibility.2
  • Clinical judgment models: New York does not mandate a fixed statewide score cutoff. While scores below 1.5 standard deviations often guide decisions, the IEP team retains broad discretion. Colorado similarly relies on clinical judgment with quantitative support, with no fixed statewide threshold required.3
  • Hybrid models: Texas and Illinois both require language scores at or below 1.5 standard deviations on two or more tests, but allow clinical judgment for speech sound eligibility. Virginia requires both quantitative data and clinical judgment, using severity rating scales for speech and a 1.5 standard deviation threshold for language.1

Notable Variations in Score Thresholds

Not all quantitative states draw the line in the same place. Ohio, for example, allows two pathways: a student may qualify with scores at or below 2.0 standard deviations on at least one test combined with clinical judgment, or with scores at or below 1.5 standard deviations on two or more tests.1 Florida mirrors the 1.5 standard deviation threshold for language but uses a different measure for speech, qualifying students who demonstrate a 20 percent or greater error rate or meet clinical severity scales.

These differences mean a student who qualifies in one state might not meet the threshold in another, even with identical test scores.
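Because most standardized language tests use a mean of 100 and a standard deviation of 15, an SD threshold translates directly into a standard-score cutoff (1.5 SD below the mean is 77.5; 2.0 SD is 70). The sketch below shows the arithmetic only; the cutoffs are deliberately simplified, since real eligibility decisions involve multiple tests, clinical judgment, and adverse-impact evidence as described above.

```python
# Sketch: how one standard score lands against different SD cutoffs.
# Most standardized language tests report mean 100, SD 15.
MEAN, SD = 100, 15

def sd_below_mean(standard_score):
    """How many standard deviations below the mean a score falls."""
    return (MEAN - standard_score) / SD

score = 76                  # hypothetical core language standard score
z = sd_below_mean(score)    # 1.6 SD below the mean

# Illustrative, simplified thresholds -- not complete state rules.
print(z >= 1.5)   # True: meets a 1.5 SD cutoff
print(z >= 2.0)   # False: does not meet a 2.0 SD pathway
```

This is why identical scores can produce different eligibility outcomes: a score of 76 clears a 1.5 SD bar but falls short of a 2.0 SD one.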

The Tension: Discrepancy Models vs. Clinical Judgment

The most consequential debate in eligibility policy centers on whether standardized test scores alone should drive decisions. Discrepancy models, which rely heavily on fixed cutoffs, can systematically disadvantage two groups:

  • Multilingual students, whose performance on English-normed standardized tests may not accurately reflect their true language abilities
  • Students who score within normal limits on standardized assessments but demonstrate clear functional impairments in the classroom

Several states have responded to this concern with recent policy updates. Clinicians working with multilingual populations will find that understanding these equity-focused changes is increasingly important, and those interested in serving these communities can explore pathways to becoming a bilingual speech pathologist. Colorado's 2025 guidance from the Colorado Department of Education adds an explicit equity focus for multilingual learners.3 Illinois emphasized culturally sensitive assessments in its 2024 rules. Texas now requires Response to Intervention (RTI) data as part of the eligibility process, and New York's 2024 guidance reinforces RTI prereferral before formal evaluation.1

Recent Policy Changes Worth Watching

Eligibility frameworks are not static, and several states have made meaningful updates in the past two years:

  • Florida's Rule 6A-6.03023, updated in 2025, integrates Multi-Tiered System of Supports (MTSS) into the eligibility process.
  • Ohio's 2025 update incorporates telepractice evaluations, reflecting post-pandemic shifts in service delivery.1
  • Virginia's 2024 regulations emphasize progress monitoring and explicitly exclude transient issues from eligibility.1
  • California's Voice Options Program, running from 2026 through 2029, expands eligibility for augmentative and alternative communication (AAC) services.5

What This Means for Your Practice

Regardless of which state you practice in, remember that IDEA requires evidence of adverse educational impact. A low test score alone does not guarantee eligibility, and a score within normal limits does not automatically disqualify a student. The strongest evaluations document functional classroom performance alongside standardized data, using reliable SLP assessment tools to give the IEP team a complete picture.

If you are preparing for school-based practice, familiarize yourself with the specific eligibility criteria in your state. Resources from ASHA and your state's department of education are the most reliable starting points. Program comparisons can also help you identify graduate programs with strong school-based clinical training, preparing you to navigate these differences with confidence from your first clinical fellowship forward.

Progress Monitoring and Re-Evaluation Protocols

Effective treatment planning does not end when therapy begins. Systematic progress monitoring tells you whether a client is responding to intervention, whether goals need adjustment, and when it is time to discharge or refer. The specific protocols you follow will depend on whether you practice in a school, medical, or SLP private practice setting, but the underlying principle is the same: let data drive your decisions.

Progress Monitoring Methods

Several data collection approaches are commonly used in speech-language pathology, and most clinicians rely on a combination rather than a single method.

  • Trial-by-trial data: Every response during a structured activity is recorded, giving you a high-resolution picture of performance on a specific target. This works well for articulation drills and language production tasks.
  • Probe-based data: Brief, standardized probes are administered at set intervals (for example, every fourth session) to measure generalization outside of practiced items. Probes reduce the risk of confusing rehearsed accuracy with true skill acquisition.
  • Curriculum-based measures: In school settings, tracking a student's performance on grade-level literacy or language benchmarks helps connect therapy targets to academic outcomes.
  • Standardized re-assessment: Readministering a norm-referenced test at appropriate intervals provides a formal comparison to the original baseline.
  • Goal attainment scaling (GAS): Each goal is assigned a five-point outcome scale, conventionally scored from -2 (much less than expected) to +2 (much more than expected), with 0 representing the expected outcome. GAS is especially useful for complex or functional goals that resist simple percentage scoring.
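To make these methods concrete, here is a minimal sketch of how trial-by-trial accuracy and a GAS scale might be represented. The function name, trial data, and target are hypothetical illustrations, not a standard clinical tool:

```python
# Minimal sketch of trial-by-trial data tallying for one therapy target.
# The data, target, and function names are hypothetical examples.

def percent_correct(trials):
    """Return the percentage of correct responses from True/False trial results."""
    if not trials:
        return 0.0
    return 100.0 * sum(trials) / len(trials)

# One session's trial-by-trial results for initial /s/ at the word level
session_trials = [True, True, False, True, False, True, True, True, False, True]
print(f"Session accuracy: {percent_correct(session_trials):.0f}%")  # 7/10 -> 70%

# Goal attainment scaling uses a five-point outcome scale
gas_levels = {
    -2: "much less than expected",
    -1: "somewhat less than expected",
     0: "expected outcome",
     1: "somewhat more than expected",
     2: "much more than expected",
}
```

The same percentage calculation underlies probe-based data as well; only the item set changes, from practiced targets to untrained probes.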

Re-Evaluation in School Settings Under IDEA

The Individuals with Disabilities Education Act (IDEA) requires a re-evaluation at least once every three years for students receiving special education services. However, a full battery of standardized tests is not always necessary. The IEP team may conduct a review of existing data, including progress monitoring records, classroom performance, and teacher input, and determine that existing information is sufficient. A full re-evaluation is required when the team lacks enough data to determine continued eligibility or when there is reason to believe the student's needs have changed significantly. Parents and the school district can also agree in writing that a triennial re-evaluation is unnecessary, though this waiver should be carefully considered. When a full re-evaluation is warranted, choosing the right instruments is critical; our guide to the best SLP assessment tools can help you match tests to the student's profile.

Re-Evaluation in Medical and Private Practice Settings

Outside of schools, re-evaluation timelines are typically driven by insurance payers rather than federal education law. Many payers require updated documentation of medical necessity every six to twelve months, and some require a formal re-evaluation before authorizing additional treatment sessions. These re-evaluations should document current functional status, progress toward established goals, and a clear rationale for continued therapy. Failing to meet payer documentation timelines is one of the most common reasons for claim denials.

Practical Data Tracking Habits

Good data habits prevent premature goal changes and support defensible clinical decisions. Grounding your monitoring approach in evidence-based speech therapy techniques ensures the methods you choose are both valid and efficient. As a general guideline, collect a minimum of three to five data points per goal before drawing conclusions about a client's trajectory. If accuracy on a target has plateaued over several consecutive sessions, consider whether the barrier is the treatment approach, the goal itself, or an underlying factor that warrants referral to another professional.

  • If the client is progressing but slowly, adjust the intensity or dosage of practice before abandoning the approach.
  • If accuracy is consistently below 20 to 30 percent after a reasonable trial period, reconsider the goal's developmental appropriateness or the teaching strategy.
  • If progress stalls across multiple goal areas simultaneously, a referral for audiological, neuropsychological, or medical evaluation may be warranted.
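The plateau check described above can be sketched as a simple rule over recent session data. The window size and tolerance here are illustrative choices, not clinical standards:

```python
# Sketch of a simple plateau check over recent session accuracy data.
# The window size and tolerance are illustrative, not clinical standards.

def has_plateaued(accuracies, window=4, tolerance=5.0):
    """Return True if the last `window` accuracy values vary by no more
    than `tolerance` percentage points (progress appears flat)."""
    if len(accuracies) < window:
        return False  # not enough data points yet to judge a trend
    recent = accuracies[-window:]
    return max(recent) - min(recent) <= tolerance

goal_data = [40, 45, 62, 64, 63, 65]  # percent correct across sessions
if has_plateaued(goal_data):
    print("Accuracy has plateaued; revisit the approach, goal, or referrals.")
```

Note that the three-to-five data point minimum maps directly onto the `window` parameter: with fewer points than the window, the function declines to call a trend at all.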

Consistent data collection also strengthens your documentation for insurance reviews, IEP meetings, and interdisciplinary consultations. Whether you use a paper tally sheet, a digital tracking app, or a spreadsheet, the key is recording data in real time rather than relying on post-session memory.

Insurance Documentation and CPT Coding for SLP Evaluations

Thorough documentation does more than satisfy clinical standards. It also determines whether your services are authorized, reimbursed, and defensible under audit. Understanding the billing codes tied to speech-language evaluations and the documentation standards each payer expects is a core competency for every clinician, whether you work in a hospital, private practice, or school-based setting that also bills Medicaid.

CPT Codes for SLP Evaluations

Four Current Procedural Terminology (CPT) codes cover the speech-language evaluation domain. For the most accurate and up-to-date descriptions, always consult the current edition of the CPT manual published by the American Medical Association (AMA) or its website, because code descriptors can be revised in annual updates.

  • 92521: Evaluation of speech fluency (e.g., stuttering, cluttering).
  • 92522: Evaluation of speech sound production (e.g., articulation, phonological processes).
  • 92523: Evaluation of speech sound production with evaluation of language comprehension and expression.
  • 92524: Behavioral and qualitative analysis of voice and resonance.

These codes rarely undergo major structural changes from year to year, but it is good practice to verify them each January when the new CPT edition takes effect. Selecting the wrong code, or bundling codes inappropriately, can trigger claim denials or requests for additional documentation.
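If you keep a billing template or EHR shortcut list, the four evaluation codes can live in a simple lookup. The descriptors below are paraphrased from this guide and should always be verified against the current AMA CPT edition before billing:

```python
# Paraphrased descriptors for the four SLP evaluation CPT codes.
# Verify against the current AMA CPT edition; descriptors can change annually.
SLP_EVAL_CPT = {
    "92521": "Evaluation of speech fluency (stuttering, cluttering)",
    "92522": "Evaluation of speech sound production (articulation, phonology)",
    "92523": "Speech sound production eval with language comprehension/expression",
    "92524": "Behavioral and qualitative analysis of voice and resonance",
}

def describe(code):
    """Look up a paraphrased descriptor, flagging unknown codes for review."""
    return SLP_EVAL_CPT.get(code, "Unknown code -- check the current CPT manual")

print(describe("92523"))
```

A lookup like this also makes the 92522-versus-92523 distinction explicit at billing time: 92523 bundles the language evaluation, so the two should not be billed together for the same encounter.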

Medical Necessity Documentation Standards

Payers require that evaluation reports clearly establish medical necessity before they approve ongoing treatment. The specifics vary by payer, so reviewing the relevant policy documents is essential.

  • Medicare: The CMS Medicare Benefit Policy Manual, Chapter 15, outlines coverage criteria for speech-language pathology services. Local Coverage Determinations (LCDs) issued by Medicare Administrative Contractors add region-specific rules about diagnoses, frequency limits, and supporting evidence. Check the CMS website or your MAC's portal for the latest LCDs.
  • Commercial payers: Each insurer publishes its own medical review policies on its provider portal. These policies spell out what clinical evidence is needed, which SLP assessment tools must be reported, and how functional limitations should be described.
  • Medicaid: State Medicaid programs set their own documentation thresholds. Some require physician referrals, while others accept referrals from other qualified professionals.

Regardless of payer, your evaluation report should connect assessment findings to functional deficits, explain why skilled intervention is required (rather than supportive or maintenance-level care), and reference objective data such as standardized test scores, percentage-correct baselines, or criterion-referenced measures. Grounding your clinical reasoning in evidence-based practice strengthens both the report and any subsequent appeal.

Authorization and Pre-Certification Tips

Many commercial plans and some Medicaid programs require prior authorization before you begin treatment. Missing this step can result in denied claims even when the services themselves are fully justified.

  • Contact the payer's customer service line or check its provider portal for the latest speech therapy authorization requirements before scheduling the evaluation.
  • Document the authorization number, approved number of visits, and date range in the patient's file.
  • If a payer denies authorization, request a peer-to-peer review. Having clear, measurable baseline data from your evaluation strengthens an appeal.
  • Track authorization expiration dates and submit re-authorization requests well in advance, including updated progress data.
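The tracking steps above can be sketched as a small record with an expiration reminder. The field names, values, and 14-day lead time are illustrative choices, not payer requirements:

```python
# Sketch of an authorization-tracking record with a re-authorization reminder.
# All field names and values are hypothetical; the lead time is a design choice.
from datetime import date, timedelta

authorization = {
    "auth_number": "A-12345",          # hypothetical authorization number
    "visits_approved": 12,
    "start_date": date(2026, 1, 15),
    "end_date": date(2026, 4, 15),
}

def reauth_due(auth, lead_days=14, today=None):
    """Return True once the re-authorization request should be submitted."""
    today = today or date.today()
    return today >= auth["end_date"] - timedelta(days=lead_days)

# Two weeks before the authorization expires, the reminder fires
print(reauth_due(authorization, today=date(2026, 4, 1)))
```

Whether this lives in a spreadsheet, a practice management system, or a script like this one matters less than the habit itself: the reminder has to fire early enough to gather updated progress data before the window closes.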

Building payer-compliant documentation habits into your evaluation workflow from the start saves significant time and reduces the risk of revenue loss. The ASHA Practice Portal offers additional guidance on navigating reimbursement across payer types, making it a useful reference as policies evolve.

Frequently Asked Questions About SLP Evaluation and Treatment Planning

Below are answers to some of the most common questions students and early-career clinicians ask about the speech-language pathology evaluation and treatment planning process. Each answer includes a pointer to the section of this guide where you can explore the topic in greater depth.

What is SLP evaluation and treatment?
SLP evaluation is the systematic process a speech-language pathologist uses to identify communication or swallowing disorders through standardized and informal assessments. Treatment is the individualized intervention plan that follows, targeting the specific areas of need uncovered during the evaluation. Together, they form a continuous cycle of assessment, goal setting, intervention, and progress monitoring. For a full overview, see our intro section on what every clinician needs to know.
What is included in a speech-language evaluation?
A comprehensive speech-language evaluation typically includes a case history review, oral mechanism examination, standardized testing, informal measures such as language sampling, and clinical observation. The clinician also gathers input from caregivers, teachers, or other professionals. Results are synthesized into a diagnostic impression and recommendations. The section titled 'What Is Included in a Speech-Language Evaluation?' walks through each component in detail.
How do you write an SLP evaluation report?
An SLP evaluation report should include identifying information, referral reason, background history, assessment results, clinical impressions, and specific recommendations. Use clear, jargon-free language when possible so that families and other team members can understand findings. Each section of the report should connect assessment data to functional communication needs. Our step-by-step guide on writing an SLP evaluation report covers formatting and best practices.
What are SMART goals in speech therapy treatment planning?
SMART goals are Specific, Measurable, Achievable, Relevant, and Time-bound objectives that guide speech therapy intervention. For example, a goal might state that a client will produce the /s/ sound in the initial position of words with 80% accuracy across three consecutive sessions within 12 weeks. Well-written SMART goals keep therapy focused and make progress easy to document. See our section on developing measurable treatment goals for examples and templates.
How often should SLP re-evaluations be conducted?
Re-evaluation timelines depend on the setting and regulatory requirements. In school-based practice, federal law requires re-evaluation at least every three years, though teams can request one sooner. In medical and private practice settings, re-evaluations are often conducted every six to twelve months or when a significant change in status occurs. Our section on progress monitoring and re-evaluation protocols details the schedules and decision points for each setting.
What documentation is required for insurance coverage of speech therapy?
Insurance companies generally require a physician referral or prescription, a diagnostic evaluation report with ICD and CPT codes, a treatment plan with measurable goals, and regular progress notes. Many payers also require prior authorization before treatment begins. Failing to submit complete documentation is one of the most common reasons claims are denied. The section on insurance documentation and CPT coding for SLP evaluations breaks down each requirement.
What assessments are used for bilingual or multilingual clients?
Clinicians working with bilingual or multilingual clients should use dynamic assessment approaches, language sampling in all languages spoken, and culturally and linguistically appropriate standardized tools when available. Assessments such as the Bilingual English-Spanish Assessment (BESA) and the Ortiz PVAT are designed for diverse populations. Interpreters and cultural informants can also support accurate evaluation. Our section on common assessment tools discusses when and how to use these instruments.
