Assessment Procedures for Students with Disabilities
This study guide covers Competency 0006 of the MTLE Special Education Core Skills Subtest 2, which falls within Subarea I — Referral, Evaluation, Planning, and Programming. This subarea accounts for 50% of Subtest 2. The content addresses the full assessment cycle for students with disabilities: foundational terminology, legal obligations under IDEA including Child Find, the pre-referral and referral process within an RTI/MTSS framework, eligibility evaluation procedures and timelines, types of formal and informal assessment tools, accommodations and modifications for assessment, assistive technology in assessment contexts, data compilation methods, and the interpretation of assessment results to make eligibility and instructional decisions.
As a Minnesota special educator, you are a key participant in every stage of the assessment process — from identifying students who may need evaluation to using ongoing progress monitoring data to adjust IEP services. Deep knowledge of assessment procedures protects students' legal rights, informs high-quality individualized programming, and ensures that referrals and eligibility decisions are based on comprehensive, multi-source data.
Assessment Terminology
A shared vocabulary for assessment is essential for communicating with evaluation teams, families, administrators, and related service providers. The table below defines key terms that appear throughout evaluation reports, IEP documents, and professional literature.
Assessment Terms Reference Table
| Term | Definition | Example Use |
|---|---|---|
| Standardized Assessment | A test administered under uniform conditions with a predetermined protocol so that results can be compared across individuals or groups | Woodcock-Johnson Tests of Cognitive Abilities administered according to standardized procedures |
| Norm-Referenced Assessment | A test that compares a student's performance to the performance of a normative sample of peers of the same age or grade level | A student scores at the 12th percentile compared to same-age peers on a cognitive battery |
| Criterion-Referenced Assessment | A test that measures a student's performance against a predetermined standard or set of skills rather than against the performance of other students | A student correctly completes 18 of 25 grade-level math facts, indicating 72% mastery of the target skill |
| Curriculum-Based Measurement | A set of brief, standardized, timed probes drawn from a student's curriculum that are administered repeatedly to monitor academic growth over time | Weekly one-minute oral reading fluency probes to track words read correctly per minute across a school year |
| Formative Assessment | Ongoing assessment integrated into the instructional process to monitor student learning and provide feedback that guides instructional adjustments in real time | Exit tickets, observation checklists, and thumbs-up/down comprehension checks during a lesson |
| Summative Assessment | Assessment administered at the end of an instructional period to evaluate cumulative learning and assign a grade or rating | End-of-unit test, final portfolio review, or annual state accountability assessment |
| Diagnostic Assessment | In-depth assessment designed to identify the specific nature and extent of a student's strengths, weaknesses, and skill gaps in a particular academic or developmental domain | A comprehensive psychoeducational battery to determine whether a student has a specific learning disability in reading |
| Screening Assessment | A brief, efficient assessment given to all students to identify those who may be at risk and in need of further evaluation or intervention | Universal literacy screener given to all kindergartners at the start of the year to identify students needing Tier 2 support |
Child Find: Legal Obligation and Implementation
Child Find is a mandate under Part B of the Individuals with Disabilities Education Act requiring all state and local education agencies to identify, locate, and evaluate all children with disabilities who are in need of special education and related services. It is one of the most fundamental legal obligations in special education.
Child Find Requirements
- Age range: Child Find obligations extend from birth through age 21, with separate systems for infants and toddlers under Part C of IDEA and for school-age children under Part B. The transition between Part C early intervention services and Part B preschool special education at age three is a key Child Find coordination point.
- Coordination with early intervention agencies: School districts must coordinate with early intervention programs, state health departments, and other agencies serving young children to ensure that eligible children are identified before they reach school age. This includes developing referral pathways, sharing information within FERPA privacy constraints, and facilitating smooth transitions at age three.
- Public awareness activities: LEAs must conduct outreach to inform the public and the professional community about the availability of special education services. Activities may include brochures in community settings, information on the district website, notices to health care providers and childcare programs, and presentations at community events.
- Referral sources: Anyone may refer a child for a Child Find evaluation — parents, teachers, physicians, childcare providers, or community members. Districts must have a process for receiving and responding to referrals within established timelines.
- Homeless and highly mobile children: IDEA and McKinney-Vento require special attention to identifying and promptly evaluating children who are homeless or who move frequently, since these populations are at elevated risk for disability and may be missed by standard school-based procedures.
Teaching Application: Special educators often serve as informal Child Find coordinators within their schools. When you observe a student who is struggling significantly despite receiving quality general education instruction, document your observations and initiate the school's referral process. Early identification dramatically improves long-term outcomes.
Pre-Referral and Referral: RTI/MTSS Framework
Before a student is referred for a special education evaluation, most districts require evidence that the student has received high-quality, research-based interventions through a multi-tiered system of support and has not made adequate progress despite those interventions. This pre-referral process is formalized through Response to Intervention or Multi-Tiered System of Supports frameworks.
RTI/MTSS Tier Structure
- Tier 1 — Universal Instruction: High-quality, evidence-based core instruction provided to all students in the general education classroom. Universal screenings are conducted three times per year to identify students not making adequate progress. Approximately 80% of students should be successful with Tier 1 instruction alone. Tier 1 data establishes the baseline for comparison when a student does not respond as expected.
- Tier 2 — Targeted Supplemental Intervention: Small-group interventions with increased frequency and explicitness provided to the approximately 15% of students who do not meet benchmarks at Tier 1. Interventions are typically 20 to 30 minutes, three to five days per week, in groups of three to five students. Progress is monitored more frequently — typically every two weeks — to determine whether the student is responding to the intervention.
- Tier 3 — Intensive Individualized Intervention: Highly individualized, intensive support for the approximately 5% of students who do not respond adequately to Tier 2. Instruction is more frequent, longer in duration, and delivered in smaller groups or one-on-one. Sustained lack of response at Tier 3 is a key data point supporting referral for a special education evaluation.
Student Support Teams and Referral Documentation
- Student Support Team: A school-based team — often including the general education teacher, special education teacher, school psychologist, administrator, and family member — that coordinates pre-referral interventions, reviews data, and makes decisions about escalating support or initiating a referral.
- Documentation requirements: Pre-referral documentation must include data demonstrating that the student received research-based intervention at each tier, the duration and frequency of each intervention, progress monitoring data showing rate and level of growth compared to peers, and evidence that the intervention was implemented with fidelity.
- IDEA referral timelines: Once a referral for a special education evaluation is made, the school must respond within a reasonable time by either seeking written informed consent to evaluate or providing the parent prior written notice of its decision not to evaluate. IDEA sets a default of 60 calendar days from receipt of consent for completing the initial evaluation unless a state establishes its own timeline; Minnesota requires completion within 30 school days of receiving written consent.
- Parent referral rights: Parents have the right to request a special education evaluation at any time, regardless of where the student is in the RTI/MTSS process. The school may not delay or deny an evaluation solely because the student is receiving pre-referral interventions.
Eligibility Evaluation: Components and Procedures
A comprehensive special education evaluation determines whether a student has a disability under IDEA and needs special education services. The evaluation must be thorough, individualized, and conducted by a qualified team using multiple sources of data.
IDEA Evaluation Requirements
- Informed written consent: The school must provide written notice explaining what evaluations will be conducted, who will conduct them, the purpose of each assessment, and the parents' rights in the process. Parents must provide written consent before the evaluation begins. Evaluation may not proceed without consent.
- Evaluation timeline: Minnesota special education rules require the initial evaluation to be completed within 30 school days of receiving written parental consent, a stricter timeline than IDEA's default of 60 calendar days. After the evaluation is complete, the IEP team meets to review the results and make eligibility and placement decisions.
- Comprehensive evaluation components: The evaluation must assess all areas of suspected disability, which may include cognitive ability, academic achievement, language and communication, social-emotional and behavioral functioning, adaptive behavior, motor skills, sensory abilities, and health status. No single test can constitute the full evaluation.
- Evaluation team composition: The team must include individuals qualified to administer and interpret each assessment used. Typically, this includes the school psychologist, special education teacher, general education teacher, a district representative, parents, and relevant related service providers such as a speech-language pathologist, occupational therapist, or audiologist.
- Independent Educational Evaluations: If parents disagree with the school's evaluation, they have the right to request an IEE at public expense. The school must either fund the IEE or initiate a due process hearing to defend the adequacy of its own evaluation. IEE results must be considered in any subsequent IEP decisions.
- Nondiscriminatory evaluation: Assessments must be administered in the student's native language or other mode of communication, selected to avoid cultural and racial bias, and used only for the purposes for which they are validated. No single criterion — such as an IQ score alone — may determine eligibility.
Assessment Types: Formal, Informal, and Alternative
A comprehensive evaluation draws on multiple types of assessment, each providing a different lens through which to understand the student's strengths and needs. Using multiple assessment types is both an IDEA requirement and a best practice for producing reliable, valid eligibility and instructional decisions.
Assessment Types Comparison
| Type | Advantages | Limitations | Examples |
|---|---|---|---|
| Formal Standardized | Allows comparison to norms; produces standard scores; legally defensible; consistent administration | May not reflect functional skills; can be biased against culturally/linguistically diverse students; single point-in-time snapshot | WISC-V, Woodcock-Johnson, Vineland-3, CELF-5, KeyMath-3 |
| Informal Assessment | Flexible; can be conducted in natural settings; provides context-specific data; useful for instructional planning; quick to administer | Not norm-referenced; results cannot be compared to peers; may lack reliability and validity evidence; more subjective | Teacher checklists, observations, running records, interviews, work samples, curriculum-based probes |
| Alternative/Authentic Assessment | Captures real-world skill performance; reflects genuine student abilities; appropriate for students who cannot access standardized formats; values diverse competencies | Time-intensive to collect and score; scoring can be subjective; difficult to compare across students; less widely validated for eligibility purposes | Portfolio of student work, video-recorded performance tasks, community-based observations, task analysis records |
Specific Assessment Tools by Domain
Special educators must know assessment tools commonly used to evaluate the key areas of suspected disability. The following overview covers the most frequently referenced instruments in each domain.
Adaptive Behavior Scales
- Vineland Adaptive Behavior Scales, Third Edition: A norm-referenced, comprehensive rating scale measuring adaptive behavior across communication, daily living skills, socialization, and motor skills domains. Completed through a structured interview with parents or caregivers. The Vineland is widely used for intellectual disability and autism eligibility determinations.
- Adaptive Behavior Assessment System, Third Edition: A rating scale completed by parents, teachers, or the individual being assessed, covering the three AAIDD adaptive skill domains (conceptual, social, and practical). Produces standard scores, percentile ranks, and a General Adaptive Composite (GAC) score that can be compared directly to intellectual functioning scores.
Developmental Screening Tools
- Ages and Stages Questionnaires: A parent-completed developmental screening tool covering communication, gross motor, fine motor, problem solving, and personal-social development for children from one month to five and a half years. Widely used in early intervention and pediatric settings to identify children needing further evaluation.
- Modified Checklist for Autism in Toddlers, Revised with Follow-Up: A two-stage parent-report screening tool for autism spectrum disorder in children between 16 and 30 months. A positive screen triggers follow-up questions to determine need for referral; it does not diagnose autism but identifies children who warrant a comprehensive evaluation.
Functional Behavior Assessment
A functional behavior assessment is a systematic process used to identify the function — the purpose or motivation — of a student's challenging behavior in order to design an effective behavior intervention plan.
- ABC data collection: Antecedent-Behavior-Consequence recording documents what happens immediately before a behavior, the precise behavior itself, and what happens immediately after. Patterns in ABC data reveal the likely function of the behavior — typically access to attention, access to tangibles or activities, escape from demands, or automatic sensory reinforcement.
- Scatter plots: A grid tracking when and under what conditions a behavior occurs across the school day and week. Scatter plots reveal patterns by time, setting, subject, or activity, narrowing the focus for more intensive observation.
- Indirect FBA methods: Interviews with the student, teachers, parents, and others who know the student well; rating scales such as the Motivation Assessment Scale or FAST. Indirect methods generate hypotheses to be confirmed through direct observation.
- Direct FBA methods: Structured direct observation of the student in the natural setting during times when the behavior is likely to occur. Produces the most valid functional hypothesis because the behavior is observed firsthand in context.
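The pattern-finding step of ABC recording can be sketched as a simple tally across incident records. This is an illustrative sketch only: the incidents and function codes below are hypothetical, and assigning a function to a behavior is ultimately a clinical judgment made by the team, not an automated computation.

```python
from collections import Counter

# Each record is one observed incident, coded by the observer with an
# antecedent, a consequence, and a hypothesized function. All entries
# here are hypothetical examples, not a validated coding scheme.
abc_log = [
    {"antecedent": "math worksheet handed out", "consequence": "sent to hallway",      "function": "escape"},
    {"antecedent": "peers playing nearby",      "consequence": "teacher redirects 1:1", "function": "attention"},
    {"antecedent": "independent work begins",   "consequence": "task removed",          "function": "escape"},
    {"antecedent": "asked to read aloud",       "consequence": "allowed to skip turn",  "function": "escape"},
]

# Tally the hypothesized functions to see which pattern dominates.
tally = Counter(record["function"] for record in abc_log)
hypothesis, count = tally.most_common(1)[0]
print(f"Most frequent hypothesized function: {hypothesis} ({count} of {len(abc_log)} incidents)")
# → Most frequent hypothesized function: escape (3 of 4 incidents)
```

A dominant pattern like this one (escape in 3 of 4 incidents) becomes the hypothesis that direct observation then confirms or rules out.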
Progress Monitoring Tools
- CBM probes: Curriculum-based measurement probes in reading, math, spelling, and written expression are brief, standardized, sensitive-to-growth measures used to track student progress weekly or biweekly. Data is graphed against an aimline connecting baseline performance to the IEP goal; if data points fall below the aimline for three or more consecutive sessions, the intervention is adjusted.
- AIMSweb: A commercially available CBM platform providing standardized probes, online scoring, national norm comparisons, and automated graphing for reading fluency, early literacy, early numeracy, math facts, spelling, and written expression.
- DIBELS: Dynamic Indicators of Basic Early Literacy Skills, a standardized set of short-duration fluency measures for phonemic awareness, phonics, fluency, vocabulary, and comprehension skills, widely used for both screening and progress monitoring. DIBELS Next spans kindergarten through sixth grade; the newer DIBELS 8th Edition extends coverage through eighth grade.
- Portfolio assessment and work samples: Collections of student work gathered over time demonstrating growth, skill development, or achievement of IEP objectives. Portfolios provide qualitative evidence complementing quantitative CBM data and are especially valuable for students with complex or multiple disabilities whose progress is not well captured by standardized probes.
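The aimline decision rule described under CBM probes reduces to a few lines of arithmetic. In this sketch the baseline, goal, and weekly scores are hypothetical, and the three-consecutive-points threshold follows the common CBM convention rather than a fixed mandate:

```python
# Hypothetical oral reading fluency (words correct per minute) scenario:
# baseline 30 wcpm at week 0, IEP goal 60 wcpm at week 30.

def aimline(baseline, goal, total_weeks, week):
    """Expected score at a given week, interpolating linearly
    from baseline (week 0) to the IEP goal (final week)."""
    return baseline + (goal - baseline) * week / total_weeks

def needs_adjustment(scores, baseline, goal, total_weeks, run=3):
    """True if the most recent `run` weekly data points all fall
    below the aimline -- the conventional change-the-intervention signal."""
    recent = list(enumerate(scores, start=1))[-run:]
    return all(score < aimline(baseline, goal, total_weeks, wk) for wk, score in recent)

weekly_wcpm = [32, 33, 32, 32, 33]   # hypothetical weekly probe scores, weeks 1-5
print(needs_adjustment(weekly_wcpm, baseline=30, goal=60, total_weeks=30))  # → True
```

Here the aimline expects 33, 34, and 35 wcpm at weeks 3 through 5, so the last three scores (32, 32, 33) all fall below it and the rule fires.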
Assessment Accommodations vs. Modifications
IDEA requires that the evaluation and the annual state assessment process be accessible to students with disabilities. Understanding the distinction between accommodations and modifications is essential for IEP team decision-making.
Accommodations: Access Without Altering the Construct
An accommodation is a change in how a test is administered or how a student responds that does not alter what the test measures. Accommodations level the playing field by removing disability-related barriers to demonstrating knowledge.
- Presentation accommodations: Read-aloud of directions or test items, large print, Braille, simplified language, signed presentation. Used for students with visual impairments, reading disabilities, or hearing impairments.
- Response accommodations: Dictation to a scribe, use of a speech-to-text device, marking answers in the test booklet rather than on a separate answer sheet, use of a calculator for non-calculator sections when the goal is not computation fluency.
- Setting accommodations: Testing in a separate, smaller room to reduce distraction; preferential seating; use of a study carrel; reduction of noise and other sensory distractions.
- Timing and scheduling accommodations: Extended time, frequent breaks, administration over multiple sessions, testing at the time of day when the student performs best.
Modifications: Altering the Construct
A modification changes what is being tested or the level of performance expected. Modifications may compromise the validity of the assessment for norm-referenced comparison purposes. Examples include reducing the number of answer choices, simplifying the language of items, or allowing use of notes on a test measuring recall. Modifications should be used cautiously and documented carefully, as they may affect how results are interpreted and reported.
Assistive Technology for Assessment
Assistive technology tools may be essential for a student with a disability to access both instructional and assessment tasks. When AT is part of a student's regular instructional program, it is typically an appropriate assessment accommodation and must be documented in the IEP.
Common AT Assessment Tools
- Screen readers: Software that reads displayed text aloud, such as JAWS, NVDA, or VoiceOver. Primarily used by students with visual impairments or blindness, but also by some students with significant reading disabilities. The student hears the text rather than reading it visually.
- Text-to-speech: Software or hardware that reads digital text aloud, such as Read&Write Gold or Kurzweil 3000. Used for students with decoding disabilities, visual impairments, or processing disorders to access grade-level content in text form.
- Word prediction: Software that suggests likely next words as the student types, reducing the physical and cognitive demands of writing. Useful for students with motor impairments, dysgraphia, or language processing difficulties.
- Voice recognition software: Speech-to-text programs such as Dragon NaturallySpeaking allow the student to dictate responses verbally, which are then transcribed into text. Appropriate for students with fine motor impairments, dysgraphia, or significant writing disabilities.
- IEP documentation: The specific AT tools used routinely for assessment must be listed in the IEP under supplementary aids and services and in the assessment accommodation section. Using AT only on assessments without using it in daily instruction compromises validity and is not appropriate practice.
Data Compilation Procedures
Special educators collect and compile multiple types of data throughout the Child Find, referral, evaluation, and IEP monitoring processes. Rigorous, systematic data collection supports legally defensible decisions and enables instructional responsiveness.
Data Collection Methods
- Anecdotal notes: Written narrative records of observed student behavior, including the date, setting, behavior described precisely and objectively, and relevant context. Anecdotal notes are valuable for documenting patterns over time and for providing qualitative data to complement quantitative scores.
- Checklists: Structured lists of targeted behaviors or skills that an observer marks as present or absent, emerging, or mastered. Checklists are efficient and allow direct comparison across observation sessions or across raters.
- Task analysis data: A task analysis breaks a complex skill into its component steps. Data are recorded as the student performs each step — independently, with verbal prompt, with physical prompt, or not yet performed. Task analysis data reveal precisely where a breakdown occurs in a skill sequence and track skill acquisition over time.
- Systematic observation: Structured observation using an operational definition of the target behavior, a defined observation period, and a specified recording method such as frequency counting, duration recording, interval recording, or momentary time sampling. Systematic observation data are reliable enough to support eligibility and intervention decisions.
- Progress monitoring graphs: Visual displays of CBM data points over time, with an aimline connecting baseline performance to the IEP goal. Graphs allow the team to quickly identify whether a student is on track, ahead of the aimline, or falling behind and in need of intervention adjustment. Trend lines and slope calculations provide objective evidence of growth rate.
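The trend-line and slope calculations mentioned above amount to an ordinary least-squares fit of score against week. A minimal sketch, using hypothetical weekly words-correct-per-minute scores:

```python
def slope(scores):
    """Ordinary least-squares slope (growth per week) of weekly scores."""
    n = len(scores)
    weeks = range(1, n + 1)
    mean_x = sum(weeks) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, scores))
    den = sum((x - mean_x) ** 2 for x in weeks)
    return num / den

scores = [20, 22, 21, 25, 26, 28]   # hypothetical wcpm scores, weeks 1-6
goal_slope = (50 - 20) / 30          # aimline slope: baseline 20 to goal 50 over 30 weeks
print(f"actual growth {slope(scores):.2f}/week vs expected {goal_slope:.2f}/week")
# → actual growth 1.60/week vs expected 1.00/week
```

Comparing the fitted slope to the aimline slope gives the team an objective growth-rate statement ("gaining 1.6 words per week against an expected 1.0") rather than an eyeball judgment of the graph.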
Interpreting Assessment Results
Understanding how to read and explain assessment scores is essential for communicating with families, participating meaningfully in eligibility meetings, and translating data into instructional decisions.
Score Types and Their Meaning
- Standard scores: Derived scores that place a student's performance on a common scale with a mean of 100 and a standard deviation of 15 for most cognitive and achievement batteries. Scores between 85 and 115 fall within one standard deviation of the mean and are considered average range. Scores below 70 are more than two standard deviations below the mean. Standard scores allow direct comparison across different subtests and batteries.
- Percentile ranks: Indicate what percentage of the normative sample scored at or below the student's score. A percentile rank of 25 means the student scored equal to or higher than 25% of same-age peers — not that the student answered 25% of items correctly. Percentile ranks are nonlinear and cannot be averaged.
- Age equivalents and grade equivalents: Descriptive statistics indicating the age or grade level at which the average student earned the same raw score. These scores are frequently misinterpreted: a second-grader with an age equivalent of 4-6 (4 years, 6 months) has not regressed to preschool-level functioning; the student simply earned the same raw score as the average child of that age on that one test. Age and grade equivalents should be used cautiously and explained carefully to families.
- Confidence intervals: A range of scores within which the student's true score is likely to fall, accounting for the measurement error inherent in any test. For example, a confidence interval of 88 to 98 at the 95% confidence level means there is a 95% probability that the student's true cognitive ability score falls within that range. Eligibility and programming decisions should always reference confidence intervals rather than single point scores, which are never perfectly precise.
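The relationships among standard scores, percentile ranks, and confidence intervals can be checked numerically. This sketch assumes the usual mean-100, SD-15 scale; the SEM value is hypothetical (each published instrument reports its own), and the interval shown uses the common simplification of observed score plus or minus 1.96 times the SEM:

```python
from statistics import NormalDist

# Normal distribution matching the mean-100 / SD-15 standard score scale.
norms = NormalDist(mu=100, sigma=15)

standard_score = 85                            # one SD below the mean
percentile = norms.cdf(standard_score) * 100   # share of the norm group at or below 85

# 95% confidence interval around the observed score.
sem = 3.0                                      # hypothetical standard error of measurement
low, high = standard_score - 1.96 * sem, standard_score + 1.96 * sem
print(f"SS {standard_score} = {percentile:.0f}th percentile, 95% CI [{low:.0f}, {high:.0f}]")
# → SS 85 = 16th percentile, 95% CI [79, 91]
```

This makes the one-SD landmark concrete: a standard score of 85 corresponds to roughly the 16th percentile, and with an SEM of 3 the team would report and interpret the range 79 to 91 rather than the single point score.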
Using Assessment Data for Decision-Making
Assessment data has no value until it is synthesized, interpreted, and applied to make meaningful decisions about a student's educational program. The final — and most important — step in the assessment cycle is using multiple data sources together to answer the questions that brought the team to the table.
Triangulating Data and Making Decisions
- Triangulating multiple data sources: No single assessment tells the whole story. Valid eligibility and programming decisions require convergent evidence from at least three independent sources — for example, a cognitive battery, teacher observations, and parent report — all pointing toward the same conclusion. When data sources conflict, the team must investigate the discrepancy rather than simply averaging or ignoring it.
- Identifying patterns: Look across all assessment data for patterns by domain, setting, modality, time of day, and type of task. A student who performs poorly on all timed tasks but adequately on untimed work has a different profile than a student who struggles across all conditions. Patterns guide the specificity of IEP goals and the selection of accommodations.
- Making eligibility recommendations: Eligibility under IDEA requires that the student meet the definitional criteria for one or more disability categories AND that the disability adversely affects educational performance AND that the student needs special education and related services as a result. All three criteria must be met. Assessment data is the evidence base for each criterion.
- Planning instruction from assessment data: Present levels of academic achievement and functional performance in the IEP are derived directly from evaluation and progress monitoring data. Each IEP goal should address a specific area of need documented by data, with measurable criteria and a timeline that reflects the student's rate of growth on progress monitoring. Instruction is then designed to address IEP goals, and progress monitoring data guides ongoing instructional adjustments.
Teaching Application: During IEP team meetings, advocate for decisions grounded in data rather than subjective impressions. When the team disagrees about what the data shows, return to the raw data together. Your role as the special educator is to be the team's expert on translating assessment scores into specific, actionable instructional plans that reflect the student's strengths as well as needs.
Key Takeaways
- Child Find is a proactive legal obligation: Districts must actively seek out children with disabilities from birth through age 21, not simply wait for referrals. Public awareness, coordination with early intervention, and clear referral pathways are required.
- RTI/MTSS provides the pre-referral foundation: Evidence of high-quality intervention at Tier 1 and Tier 2 before referral protects against over-identification and ensures students receive needed support even if they do not qualify for special education.
- Informed written consent is required before evaluation: No initial evaluation may proceed without the family's written agreement after receiving full written notice of what will be assessed and why. Parents may also request an evaluation at any time independently of RTI progress.
- Evaluations must be comprehensive and multidisciplinary: A single test cannot constitute a full evaluation. All areas of suspected disability must be assessed using multiple tools and data sources, administered by qualified professionals, and reviewed by a team that includes the family.
- Know your assessment vocabulary: Standard scores, percentile ranks, age equivalents, confidence intervals, norm-referenced versus criterion-referenced, and CBM are terms you will use and explain regularly. Misinterpreting scores — especially grade equivalents — can mislead families and compromise IEP quality.
- Accommodations must not alter the construct being measured: Extended time, read-aloud, and scribe access are common accommodations; they must be routinely used in instruction to be valid on assessments. Modifications change what is being tested and must be used and documented carefully.
- Data triangulation is non-negotiable: Eligibility and programming decisions require convergent evidence from multiple independent sources. Patterns across assessments — not any single score — drive sound educational decision-making.
- Progress monitoring graphs drive ongoing instructional decisions: Aimlines and data trends are the tool for determining whether an intervention is working. Three or more consecutive data points below the aimline signal that the intervention must be adjusted.