PDF screen reader compatibility

Accessibility
Dec 12, 2023

These tests measure screen reader compatibility with PDF accessibility features. Built-in browser PDF engines are used where possible; for IE11, the Acrobat Reader plugin is used.

The results include two types of test:

  • Expected to work - these tests show support when accessibility features are used correctly
  • Expected to fail - these tests show what happens when accessibility features are used incorrectly (marked with Expected to Fail)

Reliability by user agent

The solid area in the graph shows the percentage of tests that pass in all tested interaction modes. The cross-hatched area shows partial passes that only work in some interaction modes.

An example of a partial pass is when form labels are read when tabbing, but ignored in browse mode.

Combo         | Versions                                | Test changes
------------- | --------------------------------------- | ------------
JAWS Chrome   | JAWS 2023.2311.34 with Chrome 120       |
JAWS Edge     | JAWS 2023.2311.34 with Edge 120         |
JAWS Firefox  | JAWS 2023.2311.34 with FF 115           | 3 better
JAWS IE       | JAWS 2019.1912.1 with IE11              | 2 better
NVDA Chrome   | NVDA 2023.3 with Chrome 120             |
NVDA Edge     | NVDA 2023.3 with Edge 120               |
NVDA Firefox  | NVDA 2023.3 with FF 115                 | 2 better
NVDA IE       | NVDA 2019.2 with IE11                   |
VoiceOver Mac | VoiceOver macOS 13.6 with Safari 16.6   | 2 better
VoiceOver iOS | VoiceOver iOS 16.6 with Safari iOS 16.6 |
WindowEyes IE | WindowEyes 9.2 with IE11                |
Dolphin IE    | Dolphin SR 15.05 with IE11              |
SaToGo IE     | SaToGo 3.4.96.0 with IE11               |
Average       | Including older versions                |

The average includes all versions, but some browser/AT combinations have tests for multiple versions (NVDA / JAWS / VoiceOver), while others only have tests for a single version (SaToGo and Dolphin).
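
As a minimal sketch of that weighting effect, here is a short Python illustration using made-up pass rates (not the measured figures above): a straight mean over every tested version gives combinations tested in several versions more influence than single-version combinations.

```python
# Hypothetical per-version pass rates - NOT the measured results above,
# only an illustration of the weighting effect described in the text.
pass_rates = {
    ("NVDA Firefox", "older version"): 0.35,   # combination tested in two versions
    ("NVDA Firefox", "newest version"): 0.40,
    ("SaToGo IE", "only version"): 0.20,       # combination tested in one version
}

# Straight mean over every (combination, version) pair: combinations with
# more tested versions contribute more terms, so they weigh more heavily.
mean_over_versions = sum(pass_rates.values()) / len(pass_rates)

# Averaging each combination first gives every combination equal weight.
by_combo = {}
for (combo, _version), rate in pass_rates.items():
    by_combo.setdefault(combo, []).append(rate)
mean_over_combos = sum(sum(r) / len(r) for r in by_combo.values()) / len(by_combo)

print(f"{mean_over_versions:.2f}")  # 0.32 (each tested version counted once)
print(f"{mean_over_combos:.2f}")    # 0.29 (each combination counted once)
```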

Reliability trend

Reliability trend by year (chart): 2018: 38%, 2019: 50%, 2020: 34%, 2021: 34%, 2022: 39%, 2023: 36% (no values shown for 2015 - 2017).

Expected to work

These tests use conformant PDF markup or WCAG sufficient techniques, so they might be expected to work in screen readers. This doesn't always happen. A sketch of setting the document-level properties follows the results table.

Test                                          | Result                           | NVDA Edge | NVDA FF | NVDA Cr | JAWS Edge | JAWS FF | JAWS Cr | VoiceOver Mac | VoiceOver iOS
--------------------------------------------- | -------------------------------- | --------- | ------- | ------- | --------- | ------- | ------- | ------------- | -------------
PDF1 Image with alt text                      | Should work. Fails in 26% - 50%  | Good      | Bad     | Bad     | Good      | Bad     | Bad     | Good          | Good
PDF16 Document default language set to French | Should work. Fails in 76% - 100% | Bad       | Better  | Bad     | Bad       | Better  | Bad     | Bad           | Bad
PDF18 Document with doc title                 | Should work. Fails in 26% - 50%  | Good      | Good    | Good    | Good      | Good    | Good    | Bad           | Bad
PDF19 Phrase language set to German           | Should work. Fails in 51% - 75%  | Bad       | Bad     | Bad     | Bad       | Bad     | Bad     | Good          | Good
PDF4 Decorative image marked as artifact      | Should work. Fails in 26% - 50%  | Bad       | Good    | Bad     | Bad       | Better  | Bad     | Good          | Good
PDF6 Table with header markup                 | Should work. Fails in 51% - 75%  | Bad       | Bad     | Bad     | Bad       | Bad     | Bad     | Better        | Good
PDF6 Table with header markup and alt text    | Should work. Fails in 51% - 75%  | Bad       | Bad     | Bad     | Bad       | Bad     | Bad     | Better        | Good
PDF9 Document with headings                   | Should work. Fails in 51% - 75%  | Bad       | Better  | Bad     | Bad       | Better  | Bad     | Good          | Good
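
Several of these "expected to work" properties (the document title, the default document language, and showing the title rather than the filename) live in the PDF catalog and metadata rather than in the tag tree. As a rough sketch only, assuming the pikepdf library and a hypothetical, already-tagged file named report.pdf, the properties exercised by tests like PDF16 and PDF18 could be set like this:

```python
import pikepdf

# Open an existing, already-tagged PDF (hypothetical filename).
with pikepdf.open("report.pdf", allow_overwriting_input=True) as pdf:
    # Default document language in the catalog (the PDF16 test uses French).
    pdf.Root.Lang = "fr-FR"

    # Document title in the XMP metadata (the PDF18 test).
    with pdf.open_metadata() as meta:
        meta["dc:title"] = "Rapport annuel"  # hypothetical title

    # Ask viewers to announce the title rather than the filename.
    if "/ViewerPreferences" not in pdf.Root:
        pdf.Root.ViewerPreferences = pikepdf.Dictionary()
    pdf.Root.ViewerPreferences.DisplayDocTitle = True

    pdf.save()  # overwrites report.pdf in place
```

Phrase-level language (PDF19), alt text, and table header markup need entries on individual structure elements in the tag tree, so they are normally fixed in an authoring or remediation tool rather than by post-processing.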

Expected to fail

These tests use non-conformant PDF markup or WCAG failure conditions, and are expected to fail in screen readers. A sketch for detecting several of these errors without a screen reader follows the results table.

Test                                         | Result                           | NVDA Edge | NVDA FF | NVDA Cr | JAWS Edge | JAWS FF | JAWS Cr | VoiceOver Mac | VoiceOver iOS
-------------------------------------------- | -------------------------------- | --------- | ------- | ------- | --------- | ------- | ------- | ------------- | -------------
PDF image with blank (single space) alt text | Should fail. Fails in 76% - 100% | Bad       | Bad     | Bad     | Bad       | Bad     | Bad     | Bad           | Bad
PDF image without alt text                   | Should fail. Fails in 76% - 100% | Bad       | Bad     | Bad     | Bad       | Bad     | Bad     | Bad           | Bad
PDF no headings                              | Should fail. Fails in 76% - 100% | Bad       | Bad     | Bad     | Bad       | Bad     | Bad     | Bad           | Bad
PDF table without header markup              | Should fail. Fails in 76% - 100% | Bad       | Bad     | Bad     | Bad       | Bad     | Bad     | Bad           | Bad
PDF untagged                                 | Should fail. Fails in 76% - 100% | Bad       | Bad     | Bad     | Bad       | Bad     | Bad     | Bad           | Bad
PDF with doc security                        | Should fail. Fails in 51% - 75%  | Bad       | Good    | Bad     | Bad       | Good    | Bad     | Good          | Better
PDF with no doc title                        | Should fail. Fails in 76% - 100% | Bad       | Bad     | Bad     | Bad       | Bad     | Bad     | Bad           | Bad
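
Several of these authoring errors can be spotted without a screen reader by inspecting the document catalog. Below is a minimal checker sketch, again assuming pikepdf and a hypothetical filename, that flags an untagged document, a missing title, and security settings that block accessibility extraction:

```python
import pikepdf

def quick_pdf_checks(path):
    """Flag a few of the authoring errors covered by the expected-to-fail tests."""
    problems = []
    with pikepdf.open(path) as pdf:
        # "PDF untagged": a tagged PDF declares /MarkInfo << /Marked true >>.
        mark_info = pdf.Root.get("/MarkInfo")
        if mark_info is None or not mark_info.get("/Marked", False):
            problems.append("untagged: no /MarkInfo with /Marked true")

        # "PDF with no doc title": no dc:title in XMP and no /Title in doc info.
        with pdf.open_metadata() as meta:
            title = meta.get("dc:title") or str(pdf.docinfo.get("/Title", ""))
        if not title.strip():
            problems.append("no document title")

        # "PDF with doc security": permissions that stop assistive technology
        # extracting content are an accessibility failure.
        if pdf.is_encrypted and not pdf.allow.accessibility:
            problems.append("security settings block accessible text extraction")
    return problems

print(quick_pdf_checks("report.pdf"))  # hypothetical file
```

Blank alt text, missing headings, and missing table header markup live in the structure tree, so a fuller check would also need to walk /StructTreeRoot.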

Key

Tests expected to fail (due to authoring errors) are marked with Expected to Fail.

  • Works in all - works in 100% of tested screen readers
  • 75% to 99% - fails in 1% - 25% of tested screen readers
  • 50% to 74% - fails in 26% - 50% of tested screen readers
  • 25% to 49% - fails in 51% - 75% of tested screen readers
  • 0% to 24% - fails in 76% - 100% of tested screen readers
  • Stable - works, or doesn't cause problems, in all versions of a specific combination of screen reader and browser
  • Better - works, or doesn't cause problems, in the most recent version of a specific combination of screen reader and browser (improvement)
  • Worse - causes problems in the most recent version of a specific combination of screen reader and browser, but used to work in older versions (regression)
  • Broken - causes problems in all versions of a specific combination of screen reader and browser

Test notes

All tests were carried out with screen reader factory settings. JAWS in particular has a wide variety of settings controlling exactly what gets spoken.

Screen readers allow users to interact in different modes, and can produce very different results in each mode. The modes used in these tests are:

  • Reading - content read using the “read next” command in a screen reader
  • Tabbing - content read using the “tab” key in a screen reader
  • Heading - content read using the “next heading” key in a screen reader
  • Touch - content read when touching an area of the screen on a mobile device

In the “What the user hears” column:

  • Commas represent short pauses in screen reader voicing
  • Full stops represent places where voicing stops and the “read next”, “tab”, or “next heading” command is pressed again
  • Ellipses (…) represent a long pause in voicing
  • (Brackets) represent voicing that requires a keystroke to hear