CMS 5-Star Rating Algorithm Explained: How Quality Ratings Are Calculated for Skilled Nursing Facilities
A complete technical breakdown of CMS's Five Star Quality Rating System for SNFs — how each of the three domains is calculated, how the composite star rating is derived, and what the data sources and timing mean for facility strategy.
CMS's Five Star Quality Rating System is one of the most consequential algorithms in US healthcare. For the 15,000+ skilled nursing facilities across the country, a star rating determines referrals from hospital discharge planners, drives consumer choice, affects Medicare reimbursement rates under value-based purchasing, and influences survey frequency. Understanding exactly how the algorithm calculates ratings — not just conceptually, but mechanically — is essential for any SNF operator, technology company serving the post-acute care market, or developer building quality analytics for this space.
This is the technical breakdown. Not marketing language about "quality care" — the actual algorithm, the actual data sources, and the actual mathematics.
The Three-Domain Structure
CMS's Five Star rating is a composite score calculated from three independent domains, each of which receives its own star rating:
- Health Inspections (Surveys) — based on deficiencies cited during annual surveys and complaint investigations
- Staffing — based on staffing hours per resident per day from Payroll-Based Journal (PBJ) data
- Quality Measures — based on clinical quality outcomes calculated from MDS 3.0 assessment data
Each domain produces a 1-5 star rating independently. The composite Five Star rating is derived from combining the three domain ratings according to CMS rules — not simply averaging them.
Domain 1: Health Inspections
The Health Inspections domain receives the highest weight in the composite rating calculation. It is also the domain with the most nuanced and complicated calculation.
What the Inspection Rating Measures
CMS collects deficiency data from:
- Annual surveys — typically conducted once per year by state survey agencies on behalf of CMS
- Complaint surveys — investigations triggered by complaints filed against a facility
- Focused infection control surveys — added during the COVID-19 pandemic and continuing
Each deficiency cited during a survey is assigned:
- Scope — isolated (one or a very limited number of residents), pattern (more than a limited number), or widespread
- Severity — four levels, from no actual harm with potential for minimal harm up to immediate jeopardy; each severity-scope combination maps to a letter from A through L (for example, D = isolated deficiency with potential for more than minimal harm, G = isolated actual harm, J = isolated immediate jeopardy)
- F-Tag — the specific regulatory requirement violated
The Inspection Score Calculation
CMS converts deficiencies into a weighted point score. The weights increase significantly with severity and scope — immediate jeopardy citations carry point values that can single-handedly drop a facility from 5-star to 1-star.
The scoring matrix (base point weights by severity-scope combination, per CMS's Five-Star Technical Users' Guide; deficiencies constituting substandard quality of care carry higher weights):

| Severity | Isolated | Pattern | Widespread |
|----------|----------|---------|------------|
| No actual harm, potential for minimal harm (A-C) | 0 | 0 | 0 |
| No actual harm, potential for more than minimal harm (D-F) | 4 | 8 | 16 |
| Actual harm (G-I) | 20 | 35 | 45 |
| Immediate jeopardy (J-L) | 50 | 100 | 150 |
CMS calculates the inspection score by:
- Summing deficiency points from the three most recent annual surveys
- Adding complaint investigation deficiency points from the previous 3 years
- Applying a recency weighting — the most recent survey cycle is weighted 1/2, the prior cycle 1/3, and the oldest cycle 1/6
- Comparing the resulting weighted score to the percentile distribution of scores within the facility's state
Within each state, facilities with the lowest weighted deficiency scores — the best 10 percent — receive 5 stars for Inspections, and the worst 20 percent receive 1 star. The middle 70 percent are divided approximately evenly (about 23.3 percent each) among 2, 3, and 4 stars.
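The scoring steps above can be sketched in a few lines. This is a minimal illustration, not CMS's reference implementation: the point values follow the severity-scope matrix (base weights, without the substandard-quality-of-care add-ons), and the cycle weights are the 1/2, 1/3, 1/6 recency weights.

```python
# Deficiency points per severity/scope letter on the A-L grid.
LETTER_POINTS = {
    "A": 0, "B": 0, "C": 0,       # no actual harm, minimal potential
    "D": 4, "E": 8, "F": 16,      # potential for more than minimal harm
    "G": 20, "H": 35, "I": 45,    # actual harm
    "J": 50, "K": 100, "L": 150,  # immediate jeopardy
}

CYCLE_WEIGHTS = (1 / 2, 1 / 3, 1 / 6)  # most recent survey cycle first


def inspection_score(cycles):
    """cycles: up to three lists of deficiency letters, most recent cycle
    first; each cycle combines the standard survey with the complaint
    deficiencies from the same period."""
    return sum(
        weight * sum(LETTER_POINTS[letter] for letter in cycle)
        for weight, cycle in zip(CYCLE_WEIGHTS, cycles)
    )
```

The resulting weighted score is then compared against the state percentile cut points to assign the domain star rating.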
Special penalties: Facilities that have received an Immediate Jeopardy (IJ) citation in either of the two most recent annual surveys are automatically capped at 2 stars for the Inspections domain, regardless of their point score.
Inspection Data Timing
CMS updates Inspection star ratings on a monthly basis as new survey data is transmitted from state agencies to CASPER. There is typically a 4-6 week lag between a survey completion and its appearance in Five Star ratings. This lag is important for facilities planning survey response strategies — deficiencies cited in a survey do not immediately affect the rating.
The recency weighting also means that a poor recent survey takes 3 years to fully "age off" the rating — though the weight decreases each year.
Domain 2: Staffing
The Staffing domain uses Payroll-Based Journal (PBJ) data submitted quarterly to CMS. PBJ requires facilities to report actual paid hours by staff category and day — eliminating the self-reported staffing data that existed before 2016.
Staffing Metrics Measured
CMS calculates four staffing metrics from PBJ data:
- Total nursing hours per resident per day (HPRD) — all licensed and non-licensed nursing staff hours divided by total resident-days
- RN hours per resident per day — registered nurse hours only
- Total nursing turnover rate — 12-month rolling nursing turnover percentage
- Administrator turnover rate — 12-month administrator/director of nursing turnover
The Staffing Score Calculation
CMS calculates staffing scores by comparing each facility's staffing levels to the expected staffing levels for facilities with similar patient acuity — measured by the Case Mix Index (CMI) derived from MDS 3.0 data.
This acuity adjustment is critical: a facility serving a medically complex population (high CMI) needs more staff than a facility serving a lower-acuity population. The expected staffing levels are calculated from regression models that estimate appropriate staffing for each facility's specific resident mix.
Adjusted HPRD = (Observed HPRD ÷ Expected HPRD based on CMI) × National average HPRD
Facilities with higher adjusted staffing (more staff than expected for their resident mix) receive higher star ratings. CMS sets staffing star thresholds to distribute facilities similarly to the 20/20/20/20/20 percentile pattern.
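The adjustment reduces to a single rescaling. The sketch below assumes the national-average rescaling form described above; the numbers in the comparison are illustrative, not real CMS values.

```python
def adjusted_hprd(observed_hprd, expected_hprd, national_avg_hprd):
    """Case-mix-adjusted staffing: observed hours relative to the
    regression-expected hours for this facility's acuity, rescaled to the
    national average so facilities are directly comparable."""
    return (observed_hprd / expected_hprd) * national_avg_hprd


# Illustrative comparison: a high-acuity facility with more raw hours can
# still rank below a low-acuity facility after adjustment.
high_acuity = adjusted_hprd(4.0, expected_hprd=4.6, national_avg_hprd=3.8)
low_acuity = adjusted_hprd(3.5, expected_hprd=3.4, national_avg_hprd=3.8)
```

Here the high-acuity facility's 4.0 raw HPRD adjusts to roughly 3.3, below the low-acuity facility's adjusted 3.9 — exactly the inversion the acuity adjustment is designed to surface.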
Automatic downgrades:
- Facilities reporting four or more days in a quarter with zero RN hours are automatically limited to 1 star for the Staffing domain
- Facilities that have not submitted PBJ data receive 1 star for the Staffing domain
- Facilities that appear to have submitted identical PBJ hours across all days (potential data fabrication) are flagged for investigation
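These guards can be applied as a final step on a computed staffing rating. A minimal sketch — the four-day RN-absence threshold is stated here as an assumption about the published trigger, and the parameter names are hypothetical:

```python
def staffing_stars(computed_stars, pbj_submitted, days_with_no_rn_hours):
    """Apply the automatic staffing downgrades: missing PBJ data, or too
    many days with zero reported RN hours, force a 1-star rating."""
    RN_ABSENT_DAY_LIMIT = 4  # assumed days-per-quarter trigger
    if not pbj_submitted or days_with_no_rn_hours >= RN_ABSENT_DAY_LIMIT:
        return 1
    return computed_stars
```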
PBJ Data Timing
PBJ data is submitted quarterly on the federal fiscal year calendar, with each quarter's data due 45 days after the quarter ends: Q1 (October-December) due February 14, Q2 (January-March) due May 15, Q3 (April-June) due August 14, Q4 (July-September) due November 14.
CMS incorporates new PBJ data into Five Star ratings approximately 2-3 months after submission deadlines. This means staffing decisions made today will affect star ratings 5-8 months from now — a long feedback loop that many facilities underappreciate.
Domain 3: Quality Measures
Quality Measures are the domain most directly tied to clinical care delivery — and the domain where analytics and AI can provide the most value because quality measure outcomes can be calculated from MDS data in near-real-time.
Short-Stay vs. Long-Stay Quality Measures
CMS separates quality measures into two populations with different measure sets:
Short-Stay measures apply to residents with 100 or fewer days in the facility during the measurement period. Key short-stay measures:
- Percentage of short-stay residents who self-report moderate to severe pain (MDS Section J pain interview items)
- Percentage of short-stay residents who newly received an antipsychotic medication
- Percentage of short-stay residents who were successfully discharged to the community
- Percentage of short-stay residents who made improvements in function
- Percentage of short-stay residents who were re-hospitalized (30-day readmission)
- Percentage of short-stay residents who had an ED visit without hospitalization
Long-Stay measures apply to residents with 101 or more consecutive days. Key long-stay measures:
- Percentage of long-stay residents experiencing one or more falls with major injury
- Percentage of long-stay residents who have or had a catheter inserted and left in their bladder
- Percentage of high-risk long-stay residents with pressure ulcers (Stage 2, 3, or 4, or unstageable)
- Percentage of low-risk long-stay residents who lose control of their bowel or bladder
- Percentage of long-stay residents who received an antipsychotic medication (without a qualifying diagnosis)
- Percentage of long-stay residents whose ability to move independently worsened
- Percentage of long-stay residents with a urinary tract infection
- Percentage of long-stay residents who were physically restrained
- Percentage of long-stay residents with symptoms of depression
- Percentage of long-stay residents who received flu vaccine (October-March)
- Percentage of long-stay residents who received pneumococcal vaccine
Quality Measure Calculation from MDS
Most quality measure rates are calculated from the MDS (Minimum Data Set) 3.0 assessment data that facilities submit to CMS; the re-hospitalization, ED visit, and discharge-to-community measures are instead calculated from Medicare claims. Each measure has a specific algorithm that identifies:
- The denominator (which residents qualify for inclusion in the measure)
- The numerator (which of those residents experienced the measure event)
- Exclusions (which qualifying residents are excluded from the denominator due to clinical reasons)
For example, the falls with major injury measure:
Denominator: All long-stay residents (≥101 days) with a completed OBRA assessment in the measurement period, excluding residents in coma (MDS B0100 = 1)
Numerator: Residents in the denominator with MDS item J1900C (falls with major injury) coded 1 (one fall) or 2 (two or more falls)
Rate: Numerator / Denominator × 100
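Implemented against simplified resident records, the measure looks like the following. This is a sketch with hypothetical field names; a real implementation works from full MDS assessment records and the published look-back-scan rules.

```python
def falls_major_injury_rate(residents):
    """Rate of long-stay residents with a fall resulting in major injury.
    Each record is assumed to carry `cumulative_days`, `b0100` (coma
    status, 1 = comatose) and `j1900c` (falls with major injury:
    1 = one fall, 2 = two or more)."""
    denominator = [
        r for r in residents
        if r["cumulative_days"] >= 101 and r.get("b0100") != 1  # exclusions
    ]
    numerator = [r for r in denominator if r.get("j1900c") in (1, 2)]
    return 100 * len(numerator) / len(denominator) if denominator else 0.0
```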
Quality Measure Star Rating Calculation
CMS calculates Quality Measure star ratings through percentile comparison:
- Calculate each measure rate for the facility
- Convert measure rates to standardised scores using national percentile distributions (for most measures a lower rate indicates better performance, so the scores are inverted)
- Average the standardised scores across all short-stay measures and all long-stay measures separately
- Combine the short-stay and long-stay composite scores
- Compare to national thresholds to assign 1-5 stars
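The standardisation step can be sketched as a percentile rank against a national distribution. The inversion for lower-is-better measures and the equal short-stay/long-stay weighting are simplifying assumptions here — CMS publishes its own per-measure point values and weights.

```python
from bisect import bisect_left


def standardised_score(rate, national_rates, lower_is_better=True):
    """Map a facility's measure rate to a 0-100 score via its percentile
    rank in the national distribution; inverted when a lower rate means
    better performance (true for most quality measures)."""
    ranked = sorted(national_rates)
    percentile = 100 * bisect_left(ranked, rate) / len(ranked)
    return 100 - percentile if lower_is_better else percentile


def qm_composite_score(short_stay_scores, long_stay_scores):
    """Average the standardised scores within each population, then
    combine them (equal weighting assumed for illustration)."""
    short = sum(short_stay_scores) / len(short_stay_scores)
    long_ = sum(long_stay_scores) / len(long_stay_scores)
    return (short + long_) / 2
```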
Quality Measure data is updated quarterly in Five Star ratings, approximately 3-4 months after MDS submission deadlines.
The Composite Five Star Rating
With three domain star ratings, CMS derives the composite using a specific algorithm — not a simple average:
- Start with the Health Inspections domain star rating as the initial composite rating
- Add 1 star if the Staffing domain is 4 or 5 stars AND is higher than the Health Inspections rating
- Subtract 1 star if the Staffing domain is 1 star
- Add 1 star if the Quality Measures domain is 5 stars; subtract 1 star if it is 1 star
- If the Health Inspections rating is 1 star, cap the composite at 2 stars
- Bound the final rating between 1 star and 5 stars
This means the Health Inspections domain is the anchor for the composite rating. Excellent staffing and quality measures can add up to 2 stars to the composite, but poor inspections cannot be fully overcome by good performance in other domains.
Example:
- Health Inspections: 3 stars → initial composite = 3 stars
- Staffing: 5 stars → +1 star (4 or 5 stars and higher than the inspection rating) → 4 stars
- Quality Measures: 4 stars → no adjustment (neither 5 stars nor 1 star)
- Final composite: 4 stars
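The stepwise rules reduce to a short function — a sketch of the published algorithm, not CMS's reference code:

```python
def composite_stars(inspection, staffing, quality):
    """Derive the overall Five Star rating from the three domain ratings."""
    overall = inspection  # health inspections anchor the composite
    # Staffing: strong staffing that beats the inspection rating adds a
    # star; 1-star staffing removes one.
    if staffing >= 4 and staffing > inspection:
        overall += 1
    elif staffing == 1:
        overall -= 1
    # Quality measures: 5 stars adds one, 1 star removes one.
    if quality == 5:
        overall += 1
    elif quality == 1:
        overall -= 1
    # A 1-star inspection rating caps the overall rating at 2 stars.
    if inspection == 1:
        overall = min(overall, 2)
    return max(1, min(5, overall))


# Matches the worked example: inspections 3, staffing 5, QM 4 → 4 stars.
assert composite_stars(3, 5, 4) == 4
```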
Building a 5-Star Prediction Model
For analytics platforms and quality intelligence tools, predicting CMS star ratings before they are published is valuable because it gives facilities 60-90 days to address issues before they become public. The prediction model components:
For Health Inspections: Survey data is available in CASPER approximately 4-6 weeks after survey completion. You can calculate the inspection score using the deficiency weights and percentile comparison against the current national distribution.
For Staffing: PBJ data submitted by the facility can be used to calculate staffing HPRD and compare it to CMI-adjusted expected staffing levels. The critical dependency is having current CMI data from recent MDS submissions.
For Quality Measures: MDS 3.0 assessments submitted to CMS can be used to calculate quality measure rates in near-real-time. The calculation logic for each measure is published by CMS in the MDS 3.0 Quality Measures Technical Specifications document. Implementing this calculation for all 15+ measures is a significant but achievable engineering project.
The composite rating prediction follows the same algorithm as CMS's published methodology. Our prediction model at Octdaily achieves approximately 87% accuracy within ±0.5 stars — limited primarily by the lag between MDS submission and CMS incorporation, and by state-level percentile distribution shifts.
Implications for SNF Technology
Understanding the 5-Star algorithm at this level of detail has direct implications for SNF technology design:
Real-time quality measure monitoring requires implementing the full MDS-based quality measure calculation logic. This is computable from MDS data without waiting for CMS — enabling facilities to see their quality measure rates updated with every MDS submission.
Survey preparation is optimised by understanding the deficiency weight matrix — knowing which F-Tags carry the highest point weights and focusing preparation on the most material compliance requirements.
Staffing analytics must be CMI-adjusted to give meaningful performance context. Raw HPRD without acuity adjustment is misleading — a high-acuity facility with 4.0 HPRD may be understaffed while a low-acuity facility with 3.5 HPRD may be appropriately staffed.
Composite rating prediction requires combining all three domain models with the CMS composite algorithm — which, as noted above, is not a simple average.
Muhammad Moid Shams is a Lead Software Engineer who built the CMS 5-Star analytics platform at Octdaily, processing quality data for 20,000+ US Skilled Nursing Facilities.