Non-text content: Screen reader compatibility

Accessibility
Nov 26, 2024

This page shows screen reader compatibility for applets and timed media, demonstrating how failures and sufficient techniques behave in different screen reader and browser combinations.

The results include two types of test:

  • Expected to work - these tests show support when accessibility features are used correctly
  • Expected to fail - these tests show what happens when accessibility features are used incorrectly

WCAG 2.0 1.1.1:

  • Controls, Input: If non-text content is a control or accepts user input, then it has a name that describes its purpose. (Refer to Guideline 4.1 for additional requirements for controls and content that accepts user input.)
  • Time-Based Media: If non-text content is time-based media, then text alternatives at least provide descriptive identification of the non-text content.
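As a hypothetical illustration of the two requirements quoted above (the element content and attribute values below are invented, not taken from the test pages), a control needs an accessible name describing its purpose, while time-based media needs a text alternative that at least identifies it:

```html
<!-- Control / input: the accessible name describes the control's purpose -->
<button aria-label="Play video">▶</button>

<!-- Time-based media: the text alternative identifies the content -->
<video src="interview.mp4" controls
       aria-label="Video interview with the project lead">
</video>
```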

Reliability by user agent

The solid area in the graph shows the percentage of tests that pass in all tested interaction modes. The cross-hatched area shows partial passes that work in only some interaction modes.

An example of a partial pass is when form labels are read when tabbing, but ignored in browse mode.
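A sketch of that form-label case (not one of the tests on this page): a correctly associated label is typically announced when the user tabs to the field, yet some screen readers skip the association when reading line by line in browse mode.

```html
<!-- Announced on Tab in most screen readers; some browse modes
     read only the visible text and ignore the association -->
<label for="email">Email address</label>
<input type="email" id="email" name="email">
```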

| Combo | Versions | Test changes |
| --- | --- | --- |
| JAWS Chrome | JAWS 2024.2409.2 with Chrome 131 | 1 better |
| JAWS Edge | JAWS 2024.2409.2 with Edge 131 | 1 better |
| JAWS Firefox | JAWS 2024.2409.2 with FF 128 | 6 better |
| JAWS IE | JAWS 2019.1912.1 with IE11 | 4 better |
| NVDA Chrome | NVDA 2024.4 with Chrome 131 | 1 better |
| NVDA Edge | NVDA 2024.4 with Edge 131 | 4 better |
| NVDA Firefox | NVDA 2024.4 with FF 128 | 10 better |
| NVDA IE | NVDA 2019.2 with IE11 | |
| VoiceOver Mac | VoiceOver macOS 14.6 with Safari 17.6 | 10 better |
| VoiceOver iOS | VoiceOver iOS 17.7 with Safari iOS 17.7 | 6 better |
| WindowEyes IE | WindowEyes 9.2 with IE11 | 1 better, 1 worse |
| Dolphin IE | Dolphin SR 15.05 with IE11 | |
| SaToGo IE | SaToGo 3.4.96.0 with IE11 | |
| Average | Including older versions | |

The average includes all versions, but some browser/AT combinations have tests for multiple versions (NVDA / JAWS / VoiceOver), while others only have tests for a single version (SaToGo and Dolphin).

Reliability trend

2015: 21%, 2016: 25%, 2017: 29%, 2018: 41%, 2019: 41%, 2020: 54%, 2021: 56%, 2022: 60%, 2023: 60%, 2024: 68%

Expected to work

These tests use conformant HTML or WCAG sufficient techniques and might be expected to work in screen readers. This doesn't always happen.

Tested combinations: NVDA and JAWS with Edge, Firefox (FF) and Chrome (Cr); VoiceOver with Safari on Mac and iOS.

| Pass rate | Test |
| --- | --- |
| 94% | applet inside figure with figcaption element |
| 77% | applet with aria-label attribute |
| 74% | applet with aria-labelledby attribute |
| 67% | applet with title attribute |
| 81% | applet with fallback content |
| 1% | audio with aria-label attribute |
| 0% | audio with aria-labelledby attribute |
| 1% | audio with title attribute |
| 98% | embed inside figure with figcaption |
| 35% | embed with aria-label attribute |
| 28% | embed with aria-labelledby attribute |
| 23% | embed with title attribute |
| 58% | object with aria-label attribute |
| 52% | object with aria-labelledby attribute |
| 54% | object with title attribute |
| 56% | object with fallback content |
| 33% | video with aria-label attribute |
| 23% | video with aria-labelledby attribute |
| 32% | video with title attribute |
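The highest-scoring techniques in this table wrap the element in a figure with a figcaption, or supply fallback content. A sketch of those two patterns (file names and text content are invented for illustration; note that applet is obsolete in current HTML and appears here only because the tests cover it):

```html
<!-- embed inside figure with figcaption -->
<figure>
  <embed src="chart.svg" type="image/svg+xml">
  <figcaption>Monthly sales chart</figcaption>
</figure>

<!-- applet with fallback content -->
<applet code="Clock.class" width="100" height="100">
  Analogue clock showing the current time
</applet>
```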

Expected to fail

These tests use non-conformant HTML or WCAG failures and are expected to fail in screen readers.

Tested combinations: NVDA and JAWS with Edge, Firefox (FF) and Chrome (Cr); VoiceOver with Safari on Mac and iOS.

| Rate | Test |
| --- | --- |
| 33% | applet with alt attribute |
| 35% | applet with no description |
| 100% | audio with fallback content |
| 94% | audio with no description |
| 89% | embed with alt attribute |
| 94% | embed with no description |
| 64% | object with alt attribute |
| 67% | object with no description |
| 95% | video with fallback content |
| 100% | video with no description |
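Sketches of two of the failure patterns listed above (attribute values invented): the alt attribute is not a valid way to name an embed, and fallback content inside audio is presented only by browsers that cannot play the media, so neither reliably provides an accessible name.

```html
<!-- alt is not a valid attribute on embed, so most
     screen readers ignore it -->
<embed src="chart.svg" type="image/svg+xml" alt="Monthly sales chart">

<!-- fallback content inside audio is only shown when the browser
     cannot play the media, so screen reader users never hear it -->
<audio src="podcast.mp3" controls>
  Podcast episode 12
</audio>
```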

Key

  • Stable - works, or doesn't cause problems, in all versions of a specific combination of screen reader and browser
  • Better - works, or doesn't cause problems, in the most recent version of a specific combination of screen reader and browser (improvement)
  • Worse - causes problems in the most recent version of a specific combination of screen reader and browser, but used to work in older versions (regression)
  • Broken - causes problems in all versions of a specific combination of screen reader and browser

Test notes

All tests were carried out with screen reader factory settings. JAWS in particular has a wide variety of settings controlling exactly what gets spoken.

Screen readers allow users to interact in different modes, and can produce very different results in each mode. The modes used in these tests are:

  • Reading - content read using the “read next” command in a screen reader
  • Tabbing - content read using the “tab” key in a screen reader
  • Heading - content read using the “next heading” key in a screen reader
  • Touch - content read when touching an area of screen on a mobile device

In the “What the user hears” column:

  • Commas represent short pauses in screen reader voicing
  • Full Stops represent places where voicing stops, and the “read next” or “tab” or “next heading” command is pressed again
  • Ellipsis … represent a long pause in voicing
  • (Brackets) represent voicing that requires a keystroke to hear