Screen reader compatibility
Last updated: October 16, 2021
Shows how different WAI-ARIA attributes behave in commonly used screen readers.
The results include two types of test:

Expected to work - these tests show support when accessibility features are used correctly
Expected to fail - these tests show what happens when accessibility features are used incorrectly (marked as expected failures in the results)

ARIA support by user agent
ARIA role and attribute support in different screen reader / browser combinations.
Expected failures are not included in the reliability graph.
The solid area in the graph shows the percentage of tests that pass
in all tested interaction modes. The cross-hatched area shows
partial passes that only work in some interaction modes.
An example of a partial pass is when form labels are read when tabbing,
but ignored in browse mode.
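For illustration, a correctly associated form label of the kind these tests exercise might be marked up as follows (a minimal sketch, not one of the actual test cases):

```html
<!-- A visible label programmatically associated with its control.
     A "partial pass" here would mean some screen readers announce
     "Email" when tabbing to the input, but not in browse mode. -->
<label for="email">Email</label>
<input type="email" id="email" name="email">
```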
Combo            Versions                                   Reliability   Test Changes
JAWS Chrome      JAWS 2021.2107.12 with Chrome 94           83%           1 worse
JAWS Edge        JAWS 2021.2107.12 with Edge 94             83%           1 worse
JAWS Firefox     JAWS 2021.2107.12 with FF91                79%           10 better, 1 worse
JAWS IE          JAWS 2019.1912.1 with IE11                 86%           15 better
NVDA Chrome      NVDA 2021.2 with Chrome 94                 83%
NVDA Edge        NVDA 2021.2 with Edge 94                   83%
NVDA Firefox     NVDA 2021.2 with FF91                      93%           13 better
NVDA IE          NVDA 2019.2 with IE11                      64%           5 better
VoiceOver Mac    VoiceOver macOS 11.5 with Safari 15.0      86%           11 better
VoiceOver iOS    VoiceOver iOS 14.7 with Safari iOS 14.7    69%           4 better
WindowEyes IE    WindowEyes 9.2 with IE11                   73%           10 better, 1 worse
Dolphin IE       Dolphin SR 15.05 with IE11                 52%
SaToGo IE        SaToGo 188.8.131.52 with IE11              22%
Including older versions
The average includes all versions, but some browser/AT combinations have tests for multiple versions (NVDA / JAWS / VoiceOver),
while others only have tests for a single version (SaToGo and Dolphin).
ARIA support trend
This graph shows reliability over time for ARIA in NVDA, JAWS and VoiceOver. Other screen readers don't have enough historical data yet to plot trends.
Reliability by year: 2014 68%, 2015 68%, 2016 69%, 2017 72%, 2018 77%, 2019 78%, 2020 83%, 2021 83%.

ARIA roles

columnheader

Screen Reader    NVDA               JAWS               VoiceOver
Browser          IE    FF    Cr     IE    FF    Cr     Mac    iOS
Reliability when used correctly (86% average)
                 86%   100%  100%   75%   80%   100%   88%    75%

Test: Data table with columnheader

ARIA attributes

aria-describedby
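To make the tested features concrete, here is a hypothetical fragment combining a data table whose headers map to the columnheader role with an input described via aria-describedby. This is illustrative markup only, not the exact test cases used on this page:

```html
<!-- Native <th> elements already map to the columnheader role;
     the explicit role attribute is shown here for clarity only. -->
<table>
  <tr>
    <th role="columnheader">Name</th>
    <th role="columnheader">Age</th>
  </tr>
  <tr>
    <td>Ada</td>
    <td>36</td>
  </tr>
</table>

<!-- aria-describedby points the input at extra hint text, which
     supporting screen readers voice after the accessible name. -->
<label for="pwd">Password</label>
<input type="password" id="pwd" aria-describedby="pwd-hint">
<p id="pwd-hint">Must be at least 12 characters.</p>
```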
Tests expected to fail (due to authoring errors) are marked as expected failures.
Works in 100% of tested screen readers
Fails in 1% - 25% of tested screen readers
Fails in 26% - 50% of tested screen readers
Fails in 51% - 75% of tested screen readers
Fails in 76% - 100% of tested screen readers
Stable - works, or doesn't cause problems, in all versions of a specific combination of screen reader and browser
Better - works, or doesn't cause problems, in the most recent version of a specific combination of screen reader and browser (improvement)
Worse - causes problems in the most recent version of a specific combination of screen reader and browser, but used to work in older versions (regression)
Broken - causes problems in all versions of a specific combination of screen reader and browser
All tests were carried out with screen reader factory settings. JAWS in particular has
a wide variety of settings controlling exactly what gets spoken.
Screen readers allow users to interact in different modes, and can produce very different results in each mode.
The modes used in these tests are:
Reading - content read using the "read next" command in a screen reader
Tabbing - content read using the "tab" key in a screen reader
Heading - content read using the "next heading" key in a screen reader
Touch - content read when touching an area of screen on a mobile device
In the "What the user hears" column:
Commas represent short pauses in screen reader voicing
Full stops represent places where voicing stops, and the "read next", "tab" or "next heading" command is pressed again
An ellipsis (...) represents a long pause in voicing
(Brackets) represent voicing that requires a keystroke to hear