
CSS and text content: screen reader compatibility

Last updated: April 7, 2019

Screen reader compatibility for CSS and text, showing how failures and techniques work in specific screen reader / browser combinations.

The results include two types of test: tests that use conformant HTML or WCAG sufficient techniques and are expected to work, and tests that use non-conformant HTML or WCAG failures and are expected to fail.

Reliability by user agent

The solid area in the graph shows the percentage of tests that pass in all tested interaction modes. The cross-hatched area shows partial passes that only work in some interaction modes. An example of a partial pass is when form labels are read when tabbing, but ignored in browse mode.
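As an illustration of that kind of partial pass, consider an ordinary labelled form field: whether the label is announced can depend on whether the user reaches the control by tabbing or by arrowing through it in browse mode. A minimal sketch (the field name and label text are made up):

```html
<!-- Hypothetical form field used to illustrate a "partial pass":
     some screen reader / browser combinations announce the label
     when the field receives focus via Tab, but skip it when the
     user arrows past it in browse/reading mode. -->
<label for="email">Email address</label>
<input type="email" id="email" name="email">
```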

Combo            Versions                                    Reliability
JAWS IE          JAWS 2018.1811.2 with IE11                  50%
JAWS Firefox     JAWS 2018.1811.2 with FF60                  50%
NVDA IE          NVDA 2018.4 with IE11                       50%
NVDA Firefox     NVDA 2018.4 with FF60                       50%
VoiceOver Mac    VoiceOver macOS 10.13 with Safari 12.1
VoiceOver iOS    VoiceOver iOS 11.4 with Safari iOS 11.4     50%
WindowEyes IE    WindowEyes 9.2 with IE11
Dolphin IE       Dolphin SR 15.05 with IE11
SaToGo IE        SaToGo 3.4.96.0 with IE11
Average          Including older versions                    63%

The average includes all versions, but some browser/AT combinations have tests for multiple versions (NVDA / JAWS / VoiceOver), while others only have tests for a single version (SaToGo and Dolphin).

Reliability trend

[Chart: screen reader reliability trend by year, 2014 to 2018.]

Works as expected

These tests use conformant HTML or WCAG sufficient techniques, and work in all tested browser / screen reader combinations.

Combinations tested: NVDA (IE, FF), JAWS (IE, FF), VoiceOver (Mac, iOS), Win-Eyes (IE), Dolphin (IE), SaToGo (IE).
Page with lang set on the HTML and P elements. Should work; works in 100%. All recorded results: Good.
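A minimal sketch of the kind of markup this test exercises (the language codes and text are illustrative): lang is set on the html element for the whole document and overridden on an individual p element, giving screen readers the information they need to switch pronunciation.

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>lang example</title>
</head>
<body>
  <!-- The document language is English; this paragraph inherits it. -->
  <p>This paragraph is announced using English pronunciation rules.</p>
  <!-- lang on the P element tells screen readers to switch to French pronunciation. -->
  <p lang="fr">Bonjour tout le monde.</p>
</body>
</html>
```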

Expected to work

These tests use conformant HTML or WCAG sufficient techniques and might be expected to work in screen readers. This doesn't always happen.

Combinations tested: NVDA (IE, FF), JAWS (IE, FF), VoiceOver (Mac, iOS), Win-Eyes (IE), Dolphin (IE), SaToGo (IE).
Page with xml:lang set on the HTML and P elements. Should work; fails in 51% - 75%. All recorded results: Bad.
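A sketch of the failing variant, assuming an ordinary page served as text/html (the language code and text are illustrative): only xml:lang is set, and because HTML parsers ignore xml:lang for language processing in text/html documents, screen readers generally fall back to their default voice.

```html
<!DOCTYPE html>
<html xml:lang="fr">
<head>
  <meta charset="utf-8">
  <title>xml:lang example</title>
</head>
<body>
  <!-- xml:lang without a matching lang attribute: in pages served as
       text/html this has no effect on language processing, so screen
       readers typically keep using their default language. -->
  <p xml:lang="fr">Bonjour tout le monde.</p>
</body>
</html>
```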

Expected to fail

These tests use non-conformant HTML or WCAG failures and are expected to fail in screen readers.

Results are listed in column order: NVDA IE, NVDA FF, JAWS IE, JAWS FF, VoiceOver Mac, VoiceOver iOS, Win-Eyes IE, Dolphin IE, SaToGo IE.
CSS content: property. Should fail; fails in 51% - 75%. Results: Bad, Good, Bad, Better, Good, Good, Bad, Bad, Bad.
CSS stylesheet link with `media=aural`. Should fail; fails in 76% - 100%. Results: Bad in all nine combinations.
CSS stylesheet link with `media=speech`. Should fail; fails in 76% - 100%. Results: Bad in all nine combinations.
CSS stylesheet media queries with `@media speech`. Should fail; fails in 76% - 100%. Results: Bad in all nine combinations.
CSS stylesheet with `@media aural`. Should fail; fails in 76% - 100%. Results: Bad in all nine combinations.
Definition lists with images as bullets. Should fail; fails in 76% - 100%. Results: Bad in all nine combinations.
Look alike unicode chars. Should fail; fails in 76% - 100%. Results: Bad in all nine combinations.
Space separated tables. Should fail; fails in 76% - 100%. Results: Bad in all nine combinations.
Space separated words. Should fail; fails in 76% - 100%. Results: Bad in all nine combinations.
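For context, a sketch of three of the failure patterns above (selectors, file names and text are made up): text generated with the CSS content property is exposed inconsistently, stylesheets targeted at the speech or aural media types are generally never applied because screen readers work from the normal screen rendering, and words spaced out for visual effect are announced letter by letter (WCAG failure F32).

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>Failure patterns</title>
  <!-- Stylesheet targeted at the speech media type: screen readers read
       the normal screen rendering, so rules behind media="speech" (or the
       obsolete "aural") never take effect. The file name is illustrative. -->
  <link rel="stylesheet" media="speech" href="speech.css">
  <style>
    /* 1. Text generated with the CSS content property: whether it is
       announced varies across the combinations in the table above. */
    .price::before { content: "Sale price: "; }

    /* 2. Rules inside @media speech are likewise not applied by
       current screen readers. */
    @media speech {
      h1 { /* speech-only styling would go here */ }
    }
  </style>
</head>
<body>
  <h1>Example page</h1>
  <p class="price">$10</p>

  <!-- 3. Words spaced out for visual effect (WCAG failure F32):
       screen readers announce the heading letter by letter. -->
  <h2>W E L C O M E</h2>
</body>
</html>
```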

Key

Tests expected to fail (due to authoring errors) are marked with Expected to Fail.

Test notes

All tests were carried out with screen reader factory settings. JAWS in particular has a wide variety of settings controlling exactly what gets spoken.

Screen readers allow users to interact in different modes, and can produce very different results in each mode. The modes used in these tests are:

In the "What the user hears" column: