Non-text content: screen reader compatibility

Accessibility
Dec 14, 2025

Screen reader compatibility for applets and timed media, showing how accessibility techniques and failures behave in different screen reader / browser combinations.

The results include two types of test:

  • Expected to work - these tests show support when accessibility features are used correctly
  • Expected to fail - these tests show what happens when accessibility features are used incorrectly

WCAG 2.0 1.1.1:

  • Controls, Input: If non-text content is a control or accepts user input, then it has a name that describes its purpose. (Refer to Guideline 4.1 for additional requirements for controls and content that accepts user input.)
  • Time-Based Media: If non-text content is time-based media, then text alternatives at least provide descriptive identification of the non-text content.
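As a minimal sketch of what 1.1.1 asks for (the markup, label text, and file names here are invented for illustration, not taken from the test suite): the control gets a name that describes its purpose, and the time-based media gets a text alternative that at least identifies it.

  <!-- Control: accessible name describes the purpose -->
  <button aria-label="Play the introduction video">Play</button>

  <!-- Time-based media: text alternative identifies the content -->
  <video controls aria-label="Introduction to the test suite">
    <source src="intro.mp4" type="video/mp4">
  </video>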

Reliability by user agent

The solid area in the graph shows the percentage of tests that pass in all tested interaction modes. The cross-hatched area shows partial passes that only work in some interaction modes.

An example of a partial pass is when form labels are read when tabbing, but ignored in browse mode.
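A hypothetical snippet (not one of the tests below) shows the kind of markup involved: the label can be announced when tabbing to the input, yet skipped when the user arrows through the page in browse mode.

  <label for="email">Email address</label>
  <input type="email" id="email" name="email">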

  • JAWS Chrome: JAWS 2025.2508.120 with Chrome 143 (1 better)
  • JAWS Edge: JAWS 2025.2508.120 with Edge 143 (1 better)
  • JAWS Firefox: JAWS 2025.2508.120 with FF 140 (6 better)
  • JAWS IE: JAWS 2019.1912.1 with IE11 (4 better)
  • NVDA Chrome: NVDA 2025.3 with Chrome 143 (4 better)
  • NVDA Edge: NVDA 2025.3 with Edge 143 (4 better)
  • NVDA Firefox: NVDA 2025.3 with FF 140 (10 better)
  • NVDA IE: NVDA 2019.2 with IE11
  • VoiceOver Mac: VoiceOver macOS 15.7 with Safari 26.0 (10 better)
  • VoiceOver iOS: VoiceOver iOS 18.6 with Safari iOS 18.6 (6 better)
  • WindowEyes IE: WindowEyes 9.2 with IE11 (1 better, 1 worse)
  • Dolphin IE: Dolphin SR 15.05 with IE11
  • SaToGo IE: SaToGo 3.4.96.0 with IE11
  • Average: including older versions

The average includes all versions, but some browser/AT combinations have tests for multiple versions (NVDA / JAWS / VoiceOver), while others only have tests for a single version (SaToGo and Dolphin).

Reliability trend

2015: 21%, 2016: 25%, 2017: 29%, 2018: 41%, 2019: 41%, 2020: 54%, 2021: 56%, 2022: 60%, 2023: 60%, 2024: 68%, 2025: 71%

Expected to work

These tests use conformant HTML or WCAG sufficient techniques, so might be expected to work in screen readers. This doesn't always happen.

Results per test across NVDA, JAWS, and VoiceOver (NVDA and JAWS with Edge, Firefox, and Chrome; VoiceOver on macOS and iOS):

  • 95% applet inside figure with figcaption element
  • 79% applet with aria-label attribute
  • 76% applet with aria-labelledby attribute
  • 70% applet with title attribute
  • 83% applet with fallback content
  • 1% audio with aria-label attribute
  • 0% audio with aria-labelledby attribute
  • 1% audio with title attribute
  • 98% embed inside figure with figcaption
  • 37% embed with aria-label attribute
  • 31% embed with aria-labelledby attribute
  • 27% embed with title attribute
  • 62% object with aria-label attribute
  • 56% object with aria-labelledby attribute
  • 58% object with title attribute
  • 60% object with fallback content
  • 36% video with aria-label attribute
  • 27% video with aria-labelledby attribute
  • 35% video with title attribute
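For concreteness, a sketch of two of the better-supported techniques above; the file names, label text, and fallback wording are invented:

  <!-- embed inside figure with figcaption (98%) -->
  <figure>
    <embed src="chart.svg" type="image/svg+xml">
    <figcaption>Monthly page views, 2024 to 2025</figcaption>
  </figure>

  <!-- object with aria-label attribute (62%), plus fallback content (60%) -->
  <object data="map.svg" type="image/svg+xml" aria-label="Office location map">
    Map showing the office on Example Street
  </object>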

Expected to fail

These tests use non-conformant HTML or WCAG failures and are expected to fail in screen readers.

Results per test across NVDA, JAWS, and VoiceOver (NVDA and JAWS with Edge, Firefox, and Chrome; VoiceOver on macOS and iOS):

  • 30% applet with alt attribute
  • 32% applet with no description
  • 100% audio with fallback content
  • 93% audio with no description
  • 88% embed with alt attribute
  • 93% embed with no description
  • 61% object with alt attribute
  • 63% object with no description
  • 96% video with fallback content
  • 100% video with no description
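By contrast, a sketch of two of the failing patterns; the markup is invented for illustration. alt is not a valid attribute on embed, and a video with no accessible name gives screen readers nothing to announce beyond the element itself.

  <!-- Fails: embed with alt attribute (alt is not valid on embed) -->
  <embed src="chart.svg" type="image/svg+xml" alt="Monthly page views">

  <!-- Fails: video with no description -->
  <video controls>
    <source src="intro.mp4" type="video/mp4">
  </video>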

Key

  • Stable - works, or doesn't cause problems, in all versions of a specific combination of screen reader and browser
  • Better - works, or doesn't cause problems, in the most recent version of a specific combination of screen reader and browser (improvement)
  • Worse - causes problems in the most recent version of a specific combination of screen reader and browser, but used to work in older versions (regression)
  • Broken - causes problems in all versions of a specific combination of screen reader and browser

Test notes

All tests were carried out with screen reader factory settings. JAWS in particular has a wide variety of settings controlling exactly what gets spoken.

Screen readers allow users to interact in different modes, and can produce very different results in each mode. The modes used in these tests are:

  • Reading - content read using the “read next” command in a screen reader
  • Tabbing - content read using the “tab” key in a screen reader
  • Heading - content read using the “next heading” key in a screen reader
  • Touch - content read when touching an area of screen on a mobile device

In the “What the user hears” column:

  • Commas represent short pauses in screen reader voicing
  • Full stops represent places where voicing stops and the “read next”, “tab”, or “next heading” command is pressed again
  • An ellipsis (…) represents a long pause in voicing
  • (Brackets) represent voicing that requires a keystroke to hear