Non-text content: screen reader compatibility

Last updated: September 5, 2016

Screen reader compatibility results for applets and timed media, showing how WCAG failures and sufficient techniques behave in different screen reader / browser combinations.

The results include two types of test, corresponding to two clauses of WCAG 2.0 success criterion 1.1.1:

- Controls, Input: If non-text content is a control or accepts user input, then it has a name that describes its purpose. (Refer to Guideline 4.1 for additional requirements for controls and content that accepts user input.)
- Time-Based Media: If non-text content is time-based media, then text alternatives at least provide descriptive identification of the non-text content.
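
Both clauses can be satisfied with markup along these lines (a minimal sketch; the file names and label text are illustrative placeholders, not the markup used in the tests):

  <!-- Controls, Input: the button's visible text gives it a name describing its purpose -->
  <button type="submit">Search products</button>

  <!-- Time-Based Media: aria-label provides descriptive identification of the clip -->
  <video src="tour.mp4" controls aria-label="Video tour of the exhibition"></video>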

Sometimes works

These tests use conformant HTML or WCAG sufficient techniques, so they might be expected to work in screen readers. This doesn't always happen. The markup patterns under test are sketched after the table.

Test | Result | NVDA IE | NVDA FF | JAWS IE | JAWS FF | VoiceOver Mac | VoiceOver iOS | Win-Eyes IE | Dolphin IE | SaToGo IE
APPLET inside FIGURE with FIGCAPTION element | Should work. Fails in 1% - 25% | Good | Good | Good | Worse | Good | Good | Good | Good | Good
APPLET with ARIA-LABEL attribute | Should work. Fails in 76% - 100% | Good | Worse | Better | Bad | Bad | Bad | Bad | Bad | Bad
APPLET with ARIA-LABELLEDBY attribute | Should work. Fails in 76% - 100% | Bad | Worse | Better | Bad | Bad | Bad | Bad | Bad | –
APPLET with fallback content | Should work. Fails in 26% - 50% | Bad | Better | Good | Better | Bad | Good | Good | Bad | Bad
AUDIO with ARIA-LABEL attribute | Should work. Fails in 76% - 100% | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad
AUDIO with ARIA-LABELLEDBY attribute | Should work. Fails in 76% - 100% | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad | –
EMBED inside FIGURE with FIGCAPTION | Should work. Fails in 1% - 25% | Good | Good | Good | Worse | Good | Good | Good | Good | Good
EMBED with ARIA-LABEL attribute | Should work. Fails in 76% - 100% | Good | Worse | Better | Better | Bad | Bad | Bad | Bad | Bad
EMBED with ARIA-LABELLEDBY attribute | Should work. Fails in 76% - 100% | Bad | Worse | Better | Better | Bad | Bad | Bad | Bad | –
OBJECT with ARIA-LABEL attribute | Should work. Fails in 76% - 100% | Good | Bad | Bad | Bad | Bad | Bad | Worse | Bad | Bad
OBJECT with ARIA-LABELLEDBY attribute | Should work. Fails in 76% - 100% | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad | –
OBJECT with fallback content | Should work. Fails in 51% - 75% | Bad | Bad | Bad | Bad | Better | Good | Better | Bad | Bad
VIDEO with ARIA-LABEL attribute | Should work. Fails in 76% - 100% | Good | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad
VIDEO with ARIA-LABELLEDBY attribute | Should work. Fails in 76% - 100% | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad | –
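
The sufficient techniques in this table correspond to markup patterns like the following (a minimal sketch; file names and label text are illustrative placeholders, and each naming technique is shown on one representative element, though the tests apply them to APPLET, AUDIO, EMBED, OBJECT and VIDEO as listed above):

  <!-- FIGURE with FIGCAPTION: the caption describes the embedded content -->
  <figure>
    <embed src="heart.svg" type="image/svg+xml">
    <figcaption>Animation of a beating heart</figcaption>
  </figure>

  <!-- ARIA-LABEL: the attribute supplies the accessible name directly -->
  <audio src="interview.mp3" controls aria-label="Interview with the author"></audio>

  <!-- ARIA-LABELLEDBY: the accessible name is taken from another element's text -->
  <h3 id="clip-title">Launch highlights</h3>
  <video src="launch.mp4" controls aria-labelledby="clip-title"></video>

  <!-- Fallback content: text inside the element describes the embedded content -->
  <object data="chart.svg" type="image/svg+xml">Bar chart of monthly visitor numbers</object>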

Expected to fail

These tests use non-conformant HTML or WCAG failures and are expected to fail in screen readers. The failing markup patterns are sketched after the table.

Test | Result | NVDA IE | NVDA FF | JAWS IE | JAWS FF | VoiceOver Mac | VoiceOver iOS | Win-Eyes IE | Dolphin IE | SaToGo IE
APPLET with ALT attribute | Should fail. Fails in 76% - 100% | Bad | Better | Bad | Bad | Bad | Bad | Bad | Bad | Bad
APPLET with TITLE attribute | Should fail. Fails in 76% - 100% | Bad | Worse | Bad | Bad | Bad | Bad | Bad | Bad | Bad
APPLET with no description | Should fail. Fails in 76% - 100% | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad
AUDIO with TITLE attribute | Should fail. Fails in 76% - 100% | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad
AUDIO with fallback content | Should fail. Fails in 76% - 100% | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad
AUDIO with no description | Should fail. Fails in 76% - 100% | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad
EMBED with ALT attribute | Should fail. Fails in 76% - 100% | Bad | Bad | Bad | Better | Bad | Bad | Bad | Bad | Bad
EMBED with TITLE attribute | Should fail. Fails in 76% - 100% | Bad | Worse | Bad | Better | Bad | Bad | Bad | Bad | Bad
EMBED with no description | Should fail. Fails in 76% - 100% | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad
OBJECT with ALT attribute | Should fail. Fails in 76% - 100% | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad
OBJECT with TITLE attribute | Should fail. Fails in 76% - 100% | Bad | Bad | Bad | Bad | Bad | Bad | Worse | Bad | Bad
OBJECT with no description | Should fail. Fails in 76% - 100% | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad
VIDEO with TITLE attribute | Should fail. Fails in 76% - 100% | Good | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad
VIDEO with fallback content | Should fail. Fails in 76% - 100% | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad
VIDEO with no description | Should fail. Fails in 76% - 100% | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad
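
The failing patterns correspond to markup like the following (again a sketch with placeholder file names and text). TITLE alone is an unreliable source for an accessible name, ALT is not a valid attribute on these elements, and fallback content inside AUDIO or VIDEO is only rendered by browsers that do not support the element at all, so none of these provide a dependable text alternative:

  <!-- TITLE attribute only: many screen readers never announce it -->
  <video src="launch.mp4" controls title="Launch highlights"></video>

  <!-- Fallback content in AUDIO/VIDEO: ignored by supporting browsers -->
  <audio src="interview.mp3" controls>Interview with the author</audio>

  <!-- No description at all: nothing for the screen reader to announce -->
  <object data="chart.svg" type="image/svg+xml"></object>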

Key

Tests that are expected to fail (due to authoring errors) are marked "Should fail" in the Result column.

Test notes

The threshold for inclusion in these results is 5% usage in the most recent WebAIM screen reader survey. Chrome and Android still fall below the 5% threshold, so they are not included.

All tests were carried out with screen reader factory settings. JAWS in particular has a wide variety of settings controlling exactly what gets spoken.

Screen readers allow users to interact in different modes, and can produce very different results in each mode. The modes used in these tests are:

In the «What the user hears» column: