
Non-text content: screen reader compatibility

Last updated: October 16, 2021

These results show screen reader compatibility for applets and timed media, comparing how WCAG failures and sufficient techniques behave in different screen reader / browser combinations.

The results cover two requirements of WCAG 2.0 success criterion 1.1.1:

- Controls, Input: If non-text content is a control or accepts user input, then it has a name that describes its purpose. (Refer to Guideline 4.1 for additional requirements for controls and content that accepts user input.)
- Time-Based Media: If non-text content is time-based media, then text alternatives at least provide descriptive identification of the non-text content.
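These two clauses can be illustrated with a hedged HTML sketch — the element and attribute names are real, but the file paths and label text are hypothetical:

```html
<!-- Controls, Input: the embedded player is interactive,
     so it needs a name that describes its purpose -->
<object data="weather-map.swf" type="application/x-shockwave-flash"
        aria-label="Interactive weather map">
  Interactive weather map of Europe (fallback content)
</object>

<!-- Time-Based Media: the text alternative gives a
     descriptive identification of the clip -->
<audio src="interview.mp3" controls
       aria-label="Audio: interview with the design team"></audio>
```

As the tables below show, screen reader support for these techniques varies widely, so conformant markup alone does not guarantee a usable result.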

Reliability by user agent

The solid area in the graph shows percentage of tests that pass in all tested interaction modes. The cross hatched area shows partial passes that only work in some interaction modes. An example of a partial pass is when form labels are read when tabbing, but ignored in browse mode.

| Combo | Versions | Reliability | Test Changes |
| --- | --- | --- | --- |
| JAWS Chrome | JAWS 2021.2107.12 with Chrome 94 | 42% | |
| JAWS Edge | JAWS 2021.2107.12 with Edge 94 | 42% | |
| JAWS Firefox | JAWS 2021.2107.12 with FF 91 | 50% | 6 better |
| JAWS IE | JAWS 2019.1912.1 with IE 11 | 42% | 4 better |
| NVDA Chrome | NVDA 2021.2 with Chrome 94 | 50% | |
| NVDA Edge | NVDA 2021.2 with Edge 94 | 50% | |
| NVDA Firefox | NVDA 2021.2 with FF 91 | 83% | 10 better |
| NVDA IE | NVDA 2019.2 with IE 11 | 33% | |
| VoiceOver Mac | VoiceOver macOS 11.5 with Safari 15.0 | 83% | 10 better |
| VoiceOver iOS | VoiceOver iOS 14.7 with Safari iOS 14.7 | 50% | 2 better |
| WindowEyes IE | WindowEyes 9.2 with IE 11 | 17% | 1 better, 1 worse |
| Dolphin IE | Dolphin SR 15.05 with IE 11 | 0% | |
| SaToGo IE | SaToGo 3.4.96.0 with IE 11 | 0% | |
| Average | Including older versions | 35% | |

The average includes all versions, but note that some browser/AT combinations contribute tests for multiple versions (NVDA, JAWS, VoiceOver) while others contribute only a single version (SaToGo and Dolphin), so combinations are weighted unevenly.

Reliability trend

| Year | Reliability |
| --- | --- |
| 2015 | 21% |
| 2016 | 25% |
| 2017 | 29% |
| 2018 | 41% |
| 2019 | 41% |
| 2020 | 54% |
| 2021 | 56% |

Expected to work

These tests use conformant HTML or WCAG sufficient techniques and might be expected to work in screen readers. This doesn't always happen.
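For example, the figure/figcaption pattern, which is among the most reliable techniques in these tests, looks like this (file name and caption text are hypothetical):

```html
<figure>
  <embed src="rainfall-chart.svg" type="image/svg+xml">
  <figcaption>Monthly rainfall in 2021, by region</figcaption>
</figure>
```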

| Test (should work) | Fails in | NVDA IE | NVDA FF | NVDA Chrome | JAWS IE | JAWS FF | JAWS Chrome | VoiceOver Mac | VoiceOver iOS |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| applet inside figure with figcaption element | 1%–25% | Good | Good | Good | Good | Better | Better | Good | Good |
| applet with aria-label attribute | 26%–50% | Good | Better | Good | Better | Better | Good | Better | Good |
| applet with aria-labelledby attribute | 26%–50% | Bad | Better | Good | Better | Better | Good | Better | Good |
| applet with title attribute | 26%–50% | Bad | Better | Good | Bad | Better | Good | Better | Good |
| applet with fallback content | 26%–50% | Bad | Better | Good | Good | Better | Good | Better | Good |
| audio with aria-label attribute | 76%–100% | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad |
| audio with aria-labelledby attribute | 76%–100% | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad |
| audio with title attribute | 76%–100% | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad |
| embed inside figure with figcaption | 1%–25% | Good | Good | Good | Good | Better | Good | Good | Good |
| embed with aria-label attribute | 51%–75% | Good | Better | Bad | Better | Bad | Bad | Better | Bad |
| embed with aria-labelledby attribute | 76%–100% | Bad | Better | Bad | Better | Bad | Bad | Better | Bad |
| embed with title attribute | 76%–100% | Bad | Better | Bad | Bad | Bad | Bad | Better | Bad |
| object with aria-label attribute | 51%–75% | Good | Better | Good | Bad | Better | Good | Better | Bad |
| object with aria-labelledby attribute | 51%–75% | Bad | Better | Good | Bad | Better | Good | Better | Bad |
| object with title attribute | 51%–75% | Bad | Better | Good | Bad | Better | Good | Better | Bad |
| object with fallback content | 51%–75% | Bad | Better | Good | Bad | Better | Bad | Better | Good |
| video with aria-label attribute | 51%–75% | Good | Better | Bad | Bad | Bad | Bad | Better | Better |
| video with aria-labelledby attribute | 76%–100% | Bad | Better | Bad | Bad | Bad | Bad | Better | Better |
| video with title attribute | 51%–75% | Good | Better | Bad | Bad | Bad | Bad | Better | Better |

Expected to fail

These tests use non-conformant HTML or WCAG failures and are expected to fail in screen readers.
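Two of the tested authoring errors, sketched in markup (file names are hypothetical): alt is not a permitted attribute on object or embed, and an element with no accessible name gives the screen reader nothing to announce:

```html
<!-- Non-conformant: alt is not a valid attribute on object -->
<object data="demo.mp4" type="video/mp4" alt="Product demo video"></object>

<!-- Non-conformant: no accessible name at all -->
<embed src="demo.mp4" type="video/mp4">
```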

| Test (should fail) | Fails in | NVDA IE | NVDA FF | NVDA Chrome | JAWS IE | JAWS FF | JAWS Chrome | VoiceOver Mac | VoiceOver iOS |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| applet with alt attribute | 26%–50% | Bad | Better | Good | Bad | Better | Good | Better | Good |
| applet with no description | 26%–50% | Bad | Better | Good | Bad | Better | Good | Better | Good |
| audio with fallback content | 76%–100% | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad |
| audio with no description | 76%–100% | Bad | Bad | Bad | Bad | Bad | Bad | Better | Bad |
| embed with alt attribute | 76%–100% | Bad | Bad | Bad | Bad | Bad | Bad | Better | Bad |
| embed with no description | 76%–100% | Bad | Bad | Bad | Bad | Bad | Bad | Better | Bad |
| object with alt attribute | 76%–100% | Bad | Better | Better | Bad | Better | Bad | Better | Bad |
| object with no description | 76%–100% | Bad | Better | Better | Bad | Better | Bad | Better | Bad |
| video with fallback content | 76%–100% | Better | Bad | Bad | Bad | Bad | Bad | Bad | Bad |
| video with no description | 76%–100% | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad |

Key

Tests expected to fail (due to authoring errors) are marked with Expected to Fail.

Test notes

All tests were carried out with screen reader factory settings. JAWS in particular has a wide variety of settings controlling exactly what gets spoken.

Screen readers allow users to interact in different modes, and can produce very different results in each mode. The modes used in these tests are:

In the "What the user hears" column: