Non-text content: screen reader compatibility

Accessibility
Dec 12, 2023

Screen reader compatibility tests for applets and timed media, showing how accessibility failures and techniques behave in different screen reader and browser combinations.

The results include two types of test:

  • Expected to work - these tests show support when accessibility features are used correctly
  • Expected to fail - these tests show what happens when accessibility features are used incorrectly (marked with Expected to Fail)

WCAG 2.0 1.1.1:

  • Controls, Input: If non-text content is a control or accepts user input, then it has a name that describes its purpose. (Refer to Guideline 4.1 for additional requirements for controls and content that accepts user input.)
  • Time-Based Media: If non-text content is time-based media, then text alternatives at least provide descriptive identification of the non-text content.
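For example, a minimal sketch of both clauses (the file name and wording are placeholders):

    <!-- A control: the button has a name describing its purpose -->
    <button type="button" aria-label="Play introduction video">▶</button>

    <!-- Time-based media: nearby text provides descriptive identification -->
    <figure>
      <video src="intro.mp4" controls></video>
      <figcaption>Video: Introduction to the course (3 minutes)</figcaption>
    </figure>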

Reliability by user agent

The solid area in the graph shows the percentage of tests that pass in all tested interaction modes. The cross-hatched area shows partial passes that only work in some interaction modes.

An example of a partial pass is when form labels are read when tabbing, but ignored in browse mode.
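This can happen, for instance, with an input labelled only via its title attribute; behaviour varies by screen reader, and this hypothetical sketch is for illustration only:

    <!-- Hypothetical: an input whose only label is its title attribute.
         Some screen readers announce the title when the field receives
         keyboard focus (tabbing), but skip it in browse/reading mode. -->
    <input type="search" title="Search terms">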

The tested combinations (reliability percentages are shown in the graph; test changes since the previous tested version are noted in brackets):

  • JAWS Chrome: JAWS 2023.2311.34 with Chrome 120
  • JAWS Edge: JAWS 2023.2311.34 with Edge 120
  • JAWS Firefox: JAWS 2023.2311.34 with FF 115 (6 better)
  • JAWS IE: JAWS 2019.1912.1 with IE11 (4 better)
  • NVDA Chrome: NVDA 2023.3 with Chrome 120
  • NVDA Edge: NVDA 2023.3 with Edge 120
  • NVDA Firefox: NVDA 2023.3 with FF 115 (10 better)
  • NVDA IE: NVDA 2019.2 with IE11
  • VoiceOver Mac: VoiceOver macOS 13.6 with Safari 16.6 (10 better)
  • VoiceOver iOS: VoiceOver iOS 16.6 with Safari iOS 16.6 (6 better)
  • WindowEyes IE: WindowEyes 9.2 with IE11 (1 better, 1 worse)
  • Dolphin IE: Dolphin SR 15.05 with IE11
  • SaToGo IE: SaToGo 3.4.96.0 with IE11
  • Average: including older versions

The average includes all versions, but some browser/AT combinations have tests for multiple versions (NVDA, JAWS and VoiceOver), while others have tests for only a single version (SaToGo and Dolphin).

Reliability trend

Reliability by year: 2015: 21%, 2016: 25%, 2017: 29%, 2018: 41%, 2019: 41%, 2020: 54%, 2021: 56%, 2022: 60%, 2023: 60%.

Expected to work

These tests use conformant HTML or WCAG sufficient techniques, so might be expected to work in screen readers. This doesn't always happen. A sketch of two of the better-supported techniques follows the table.

Technique (should work) | Fails in | NVDA Edge | NVDA FF | NVDA Cr | JAWS Edge | JAWS FF | JAWS Cr | VoiceOver Mac | VoiceOver iOS
applet inside figure with figcaption element | 1% - 25% | Good | Good | Good | Good | Better | Better | Good | Good
applet with aria-label attribute | 1% - 25% | Good | Better | Good | Good | Better | Good | Better | Good
applet with aria-labelledby attribute | 26% - 50% | Good | Better | Good | Good | Better | Good | Better | Good
applet with title attribute | 26% - 50% | Good | Better | Good | Good | Better | Good | Better | Good
applet with fallback content | 1% - 25% | Good | Better | Good | Good | Better | Good | Better | Good
audio with aria-label attribute | 76% - 100% | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad
audio with aria-labelledby attribute | 76% - 100% | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad
audio with title attribute | 76% - 100% | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad
embed inside figure with figcaption | 1% - 25% | Good | Good | Good | Good | Better | Good | Good | Good
embed with aria-label attribute | 51% - 75% | Bad | Better | Bad | Bad | Bad | Bad | Better | Better
embed with aria-labelledby attribute | 51% - 75% | Bad | Better | Bad | Bad | Bad | Bad | Better | Better
embed with title attribute | 76% - 100% | Bad | Better | Bad | Bad | Bad | Bad | Better | Better
object with aria-label attribute | 26% - 50% | Good | Better | Good | Good | Better | Good | Better | Better
object with aria-labelledby attribute | 51% - 75% | Good | Better | Good | Good | Better | Good | Better | Better
object with title attribute | 51% - 75% | Good | Better | Good | Good | Better | Good | Better | Better
object with fallback content | 26% - 50% | Good | Better | Good | Bad | Better | Bad | Better | Good
video with aria-label attribute | 51% - 75% | Bad | Better | Bad | Bad | Bad | Bad | Better | Better
video with aria-labelledby attribute | 76% - 100% | Bad | Better | Bad | Bad | Bad | Bad | Better | Better
video with title attribute | 51% - 75% | Bad | Better | Bad | Bad | Bad | Bad | Better | Better
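A minimal sketch of two of the better-supported techniques above (file names and descriptions are placeholders):

    <!-- embed inside figure with figcaption -->
    <figure>
      <embed src="water-cycle.svg" type="image/svg+xml">
      <figcaption>Animation of the water cycle</figcaption>
    </figure>

    <!-- object with fallback content -->
    <object data="sales-chart.svg" type="image/svg+xml">
      Bar chart showing sales rising in each quarter of 2023.
    </object>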

Expected to fail

These tests use non-conformant HTML or WCAG failures, and are expected to fail in screen readers. A sketch of two of these failures follows the table.

Technique (should fail) | Fails in | NVDA Edge | NVDA FF | NVDA Cr | JAWS Edge | JAWS FF | JAWS Cr | VoiceOver Mac | VoiceOver iOS
applet with alt attribute | 26% - 50% | Good | Better | Good | Good | Better | Good | Better | Good
applet with no description | 26% - 50% | Good | Better | Good | Good | Better | Good | Better | Good
audio with fallback content | 76% - 100% | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad
audio with no description | 76% - 100% | Bad | Bad | Bad | Bad | Bad | Bad | Better | Bad
embed with alt attribute | 76% - 100% | Bad | Bad | Bad | Bad | Bad | Bad | Better | Better
embed with no description | 76% - 100% | Bad | Bad | Bad | Bad | Bad | Bad | Better | Bad
object with alt attribute | 51% - 75% | Good | Better | Better | Bad | Better | Bad | Better | Better
object with no description | 51% - 75% | Good | Better | Better | Bad | Better | Bad | Better | Better
video with fallback content | 76% - 100% | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad
video with no description | 76% - 100% | Bad | Bad | Bad | Bad | Bad | Bad | Bad | Bad
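A minimal sketch of two of these failure cases (file names are placeholders):

    <!-- Failure: alt is not a valid attribute on object,
         so most screen readers ignore it -->
    <object data="sales-chart.svg" type="image/svg+xml"
            alt="Bar chart of 2023 sales"></object>

    <!-- Failure: audio with no description of any kind -->
    <audio src="interview.mp3" controls></audio>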

Key

Tests expected to fail (due to authoring errors) are marked with Expected to Fail.

  • Works in all: works in 100% of tested screen readers
  • 75% to 99%: fails in 1% - 25% of tested screen readers
  • 50% to 74%: fails in 26% - 50% of tested screen readers
  • 25% to 49%: fails in 51% - 75% of tested screen readers
  • 0% to 24%: fails in 76% - 100% of tested screen readers
  • Stable: works, or doesn't cause problems, in all versions of a specific combination of screen reader and browser
  • Better: works, or doesn't cause problems, in the most recent version of a specific combination of screen reader and browser (improvement)
  • Worse: causes problems in the most recent version of a specific combination of screen reader and browser, but used to work in older versions (regression)
  • Broken: causes problems in all versions of a specific combination of screen reader and browser

Test notes

All tests were carried out with screen reader factory settings. JAWS in particular has a wide variety of settings controlling exactly what gets spoken.

Screen readers allow users to interact in different modes, and can produce very different results in each mode. The modes used in these tests are:

  • Reading: content read using the “read next” command in a screen reader
  • Tabbing: content read using the “tab” key in a screen reader
  • Heading: content read using the “next heading” key in a screen reader
  • Touch: content read when touching an area of the screen on a mobile device

In the “What the user hears” column:

  • Commas represent short pauses in screen reader voicing
  • Full stops represent places where voicing stops, and the “read next”, “tab” or “next heading” command is pressed again
  • Ellipses (…) represent a long pause in voicing
  • (Brackets) represent voicing that requires a keystroke to hear
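For example, a hypothetical entry using this notation:

    Graphic, sales chart. Play button … clickable. (Press Space to activate)

Here the comma marks a short pause, the full stops mark points where the next command was pressed, the ellipsis marks a long pause, and the bracketed text is only voiced after a further keystroke.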