
How to Calculate CCBHC Screening Rates From Your EHR Data

Calculating CCBHC screening rates sounds straightforward: divide the number of patients screened by the number who should have been screened. But anyone who's actually tried to pull this data from an EHR knows it's rarely that simple.

Here's a practical walkthrough of how to get from raw EHR data to an accurate screening rate — and where the process typically breaks down.

Step 1: Define Your Denominator Population

Before you can calculate a rate, you need to know: who should have been screened?

The denominator for most CCBHC screening measures is the set of patients who had a qualifying encounter during the measurement period. "Qualifying encounter" typically means:

  • An in-person or telehealth visit (with appropriate place-of-service codes)
  • With a behavioral health provider or in a CCBHC-designated program
  • During the reporting period (usually a calendar year or demonstration period)

Pull a list of unique patients (not encounters) meeting these criteria. This is your starting denominator. Watch out for these common issues:

  • Duplicate counts: A patient seen 12 times in a year should appear once in your denominator, not 12 times
  • Telehealth codes: If telehealth visits are missing place-of-service codes, they may be excluded from your denominator entirely
  • Multi-site patients: Patients seen at multiple CCBHC locations should still be counted once
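
If you're working from a flat encounter extract, the de-duplication step is simple to script. Here's a minimal pandas sketch; the file name, column names (patient_id, encounter_date, place_of_service), and the place-of-service codes are placeholders, so swap in whatever your EHR export actually uses:

```python
import pandas as pd

# Hypothetical encounter extract; keep POS codes as strings so "02" doesn't become 2.
encounters = pd.read_csv(
    "encounters_2024.csv",
    dtype={"place_of_service": str},
    parse_dates=["encounter_date"],
)

PERIOD_START, PERIOD_END = "2024-01-01", "2024-12-31"
QUALIFYING_POS = {"02", "10", "11"}  # placeholder POS codes; use your measure spec

# Qualifying encounters: inside the measurement period, with an eligible POS code.
# Rows with a missing POS code fall out here, which is exactly the telehealth trap
# described above, so check how many encounters this filter drops.
qualifying = encounters[
    encounters["encounter_date"].between(PERIOD_START, PERIOD_END)
    & encounters["place_of_service"].isin(QUALIFYING_POS)
]

# Denominator = unique patients, not encounters. drop_duplicates collapses a patient
# seen 12 times, or seen at multiple sites, to a single row.
denominator = qualifying.drop_duplicates(subset="patient_id")[["patient_id"]]
print(f"Eligible patients: {len(denominator)}")
```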

Step 2: Identify Completed Screenings

Next, determine which of those denominator patients actually received the screening. This is where EHR-specific challenges emerge.

Epic

In Epic, screening results are typically stored in flowsheets. Look for flowsheet rows corresponding to your screening tools (PHQ-9, AUDIT, Columbia Suicide Severity Rating Scale, etc.). The key fields are:

  • The flowsheet template ID or row ID for the specific screening tool
  • The recorded date (make sure it falls within your measurement period)
  • The score value (must be a discrete numeric value, not free text)

Common Epic pitfall: clinicians sometimes document screening results in SmartPhrases or notes instead of flowsheets. These won't appear in structured data queries.
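
Here's a hedged sketch of filtering a flowsheet extract down to completed PHQ-9 rows. The row ID and column names are placeholders; the actual template and row IDs are build-specific, so confirm them with your Epic reporting team:

```python
import pandas as pd

# Hypothetical flowsheet extract; column names and the PHQ-9 row ID below are
# placeholders, not Epic's actual reporting schema.
flowsheets = pd.read_csv(
    "flowsheet_extract.csv",
    dtype={"flowsheet_row_id": str},
    parse_dates=["recorded_date"],
)
PHQ9_ROW_ID = "123456"  # placeholder: use the row ID from your own Epic build

phq9_rows = flowsheets[
    (flowsheets["flowsheet_row_id"] == PHQ9_ROW_ID)
    & flowsheets["recorded_date"].between("2024-01-01", "2024-12-31")
]
# Anything documented only in a SmartPhrase or note text never reaches this extract,
# so it will not appear in phq9_rows no matter how the filter is written.
```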

Oracle Health (Cerner)

In Oracle Health environments, screenings may be captured as clinical events, assessments, or within PowerForms. Look in the clinical_event table for events linked to your screening tool event codes. Ensure you're pulling from the correct event set and that the result values are in the expected format.

Common Oracle Health pitfall: assessment tools configured to capture free-text rather than discrete result values. If the PHQ-9 total score is stored as a text string ("9") rather than a numeric value, your comparison logic may fail silently.
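
To illustrate, here's a small sketch of the safe way to handle text-stored scores in a clinical-event extract. The column names are placeholders rather than Oracle Health's actual schema; the point is the explicit numeric coercion:

```python
import pandas as pd

# Hypothetical clinical-event extract; result_value arrives as text, including
# entries that aren't scores at all.
events = pd.DataFrame({
    "patient_id": [101, 102, 103],
    "event_code": ["PHQ9_TOTAL"] * 3,
    "result_value": ["9", "15", "declined"],
})

# Coerce text results to numbers. Anything that isn't a valid score becomes NaN
# instead of silently slipping past (or breaking) downstream comparisons.
events["score"] = pd.to_numeric(events["result_value"], errors="coerce")
valid_scores = events.dropna(subset=["score"])
print(valid_scores[["patient_id", "score"]])
```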

Netsmart (myAvatar / CareRecords)

Netsmart systems often capture screenings through custom assessment forms. Data may be stored in the assessment tables with form-specific field IDs. Work with your Netsmart administrator to identify the exact form IDs and field mappings for each screening tool.

Common Netsmart pitfall: assessment forms that exist in the system but aren't mapped to reportable fields. The data is captured clinically but isn't accessible through standard reporting queries.
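
One way to catch the captured-but-not-reportable problem is to compare the form IDs you expect against the ones that actually appear in your reporting extract. The IDs and column name below are placeholders you'd get from your Netsmart administrator:

```python
import pandas as pd

# Placeholder form IDs; your Netsmart administrator supplies the real ones.
EXPECTED_FORMS = {"PHQ9_FORM_v2", "AUDIT_FORM_v1", "CSSRS_FORM_v3"}

assessments = pd.read_csv("assessment_extract.csv")
found_forms = set(assessments["form_id"].unique())

# Any expected form missing here is being documented clinically but isn't
# flowing into the reportable data set.
missing = EXPECTED_FORMS - found_forms
if missing:
    print(f"Forms absent from the reporting extract: {sorted(missing)}")
```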

Step 3: Match Screenings to the Denominator

Now join your screening results to your denominator population. For each patient in the denominator, determine whether they have at least one valid screening result during the measurement period.

Key matching rules:

  • Date range: The screening must occur within the measurement period (or within a lookback period if the measure specification allows one)
  • One per patient: Even if a patient was screened five times, they count once in the numerator
  • Valid result: The screening must have a scoreable result — a screening opened but not completed doesn't count
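
Putting those three rules together, here's a minimal sketch using the denominator and screening frames from the earlier steps (hypothetical column names throughout):

```python
import pandas as pd

# denominator: one row per eligible patient (Step 1).
# screenings: one row per completed screening with its date and score (Step 2).
denominator = pd.DataFrame({"patient_id": [101, 102, 103]})
screenings = pd.DataFrame({
    "patient_id": [101, 101, 104],
    "screen_date": pd.to_datetime(["2024-03-01", "2024-09-15", "2024-05-02"]),
    "score": [9, 12, 4],
})

# Date range: keep only screenings inside the measurement period.
in_period = screenings[screenings["screen_date"].between("2024-01-01", "2024-12-31")]

# Valid result: drop rows without a scoreable value.
in_period = in_period.dropna(subset=["score"])

# One per patient: a patient screened five times still counts once.
screened_ids = in_period["patient_id"].drop_duplicates()

# Numerator = denominator patients with at least one valid screening.
numerator = denominator[denominator["patient_id"].isin(screened_ids)]
```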

Step 4: Handle Exclusions Correctly

Most screening measures have valid exclusions — patients who should be removed from the denominator entirely. Common exclusions include:

  • Patients who declined the screening (removed from both numerator and denominator)
  • Patients with a medical reason for not being screened
  • Patients in specific age ranges not covered by the measure

Critical distinction: "declined" is different from "not completed." If a patient declines a screening and this is documented with the proper refusal code, they should be excluded from the denominator. If a clinician simply didn't administer the screening, the patient stays in the denominator as unscreened. Getting this wrong inflates or deflates your rate significantly.
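
As a sketch, assuming a hypothetical refusals extract keyed by a documented refusal code, the logic looks like this: documented refusals leave the denominator, and everyone else stays in as unscreened.

```python
import pandas as pd

denominator = pd.DataFrame({"patient_id": [101, 102, 103, 104]})
refusals = pd.DataFrame({
    "patient_id": [103],
    "refusal_code": ["PT_DECLINED"],  # hypothetical documented-refusal code
})

# Exclusion: patients with a documented refusal come out of the denominator entirely.
excluded = denominator["patient_id"].isin(refusals["patient_id"])
eligible = denominator[~excluded]

# A patient who simply wasn't screened has no refusal row: they stay in `eligible`
# and count against the rate as unscreened.
```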

Step 5: Calculate and Validate

With your clean numerator and denominator:

Screening Rate = (Patients with valid screening / Eligible patients in denominator) × 100
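
In code, the division itself is trivial; the only thing worth adding is a guard against an empty denominator so a bad extract doesn't blow up the report. The counts below are purely illustrative:

```python
# Illustrative counts carried forward from the earlier sketches
# (Step 3 numerator, Step 4 post-exclusion denominator).
numerator_count = 412   # patients with at least one valid screening
eligible_count = 655    # eligible patients after exclusions

# Guard against an empty denominator from a failed or filtered-out extract.
screening_rate = (numerator_count / eligible_count * 100) if eligible_count else 0.0
print(f"Screening rate: {screening_rate:.1f}%")  # 62.9%
```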

Before you report this number, validate it:

  • Does the rate make clinical sense? If your depression screening rate is 98%, is that realistic given your workflow, or is something being over-counted?
  • Check a sample: Randomly pull 10-20 patient charts and manually verify that the screening data matches what's in the clinical record
  • Compare across periods: If your rate jumped from 60% to 95% in one quarter, investigate what changed — it could be a legitimate workflow improvement, or a data logic error
  • Look at the excluded population: If a large percentage of patients are being excluded, make sure the exclusion logic isn't too broad
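
The period-over-period check is the easiest of these to automate. Here's a sketch that flags any measure whose rate moved more than a chosen threshold between quarters; the 15-point threshold and the column names are assumptions, not part of any CCBHC specification:

```python
import pandas as pd

# Illustrative quarterly rates; in practice, load these from your stored results.
rates = pd.DataFrame({
    "measure": ["Depression screening"] * 4,
    "quarter": ["2024-Q1", "2024-Q2", "2024-Q3", "2024-Q4"],
    "rate": [61.2, 63.0, 95.4, 94.8],
})

# Flag quarter-over-quarter swings above an arbitrary 15-point review threshold.
THRESHOLD = 15.0
rates["change"] = rates.groupby("measure")["rate"].diff()
flagged = rates[rates["change"].abs() > THRESHOLD]
print(flagged)  # the Q3 jump of +32.4 points warrants a closer look
```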

The Reality Check

If this process sounds labor-intensive, that's because it is. Many clinics spend days or weeks each reporting cycle manually extracting, cleaning, and calculating these rates. Each step introduces opportunities for human error, and the process has to be repeated for every screening measure, every reporting period.

This is exactly the kind of repetitive, high-stakes data work that should be automated — not because automation is trendy, but because manual processes at this scale inevitably produce errors that can trigger audit findings.

Free Download: CCBHC Reporting Survival Guide

15 data quality pitfalls that cause audit findings — and how to avoid them.

Download the Free Guide →