Blaming Screen Readers 🚩×5
The title of this post is pretty specific. It relates to the meme on Twitter where users call out a trait or preference they see as problematic and mark it as a red flag. The emoji represents the red flag. For example:
Blaming Screen Readers 🚩🚩🚩🚩🚩
And here we see the usual pattern repeat itself. An inaccessible meme goes viral. After it is so tired that brands use it, someone relying on assistive technology points out how annoying this can be. Authors and developers jump up to blame assistive technology for being terrible at internetting.
In the last few days I have seen more than a few people on Twitter blame screen readers for not being evergreen like browsers, for not understanding the context, for not returning just a count of emoji, and so on. I have even seen people post code snippets on how they could fix screen readers.
Screen Readers Are Not…
…to blame for your inaccessible content.
Screen Readers Are Not Browsers
To address one false assumption, screen readers do not read pages. Not exactly. Screen readers announce what information the web browser hands them. Screen readers will add instructions for operating things, but even that is based on how the browser reports it. In the context of the web, barring heuristics and bugs, the browser is in charge.
This means that all those bits of content, navigation, the states of controls, the count of how many items are in a list, cues for form field errors, and so on, are themselves built on what the developers write. The HTML.
I am leaving out a bunch of technical detail about accessibility APIs, the DOM, the virtual DOM, heuristics, and so on. I just want to impress upon you that what screen readers announce comes from the browser.
Screen Readers Do Not Use Natural Language Processing
Another false assumption is that screen readers understand the human content they are reading. They do not. Mostly. Screen reader heuristics will look at some strings of characters and announce them differently than what you may see (1st as first, for example). But even that varies across screen readers and browsers.
I have a long history trying to stop developers from overriding how screen readers announce things when it is not what they expect.
A screen reader does not know the context of what you wrote, the implications of what it contains, or even what you wanted to convey. It just reads words aloud the best it can, adding inflection based on punctuation and maybe some other cues.
Screen Readers Do Not See What You See
But here is a curve ball — the red flag emoji isn’t a red flag. It is a triangular flag on post.
Blaming Screen Readers triangular flag on post triangular flag on post triangular flag on post triangular flag on post triangular flag on post
That is not the fault of screen readers. That is the risk of using emoji to convey meanings that are not part of the Unicode standard for the character. It is a fluke that they appear red. Platforms could make them yellow, or green, or striped, and so on.
The author’s intent is completely dependent on the arbitrary color in the emoji. Without it, the meaning you wanted to convey is completely lost.
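You can check this yourself. Python’s standard unicodedata module reports the official character name for that code point, which is the kind of name a screen reader’s announcement is generally built on (a quick illustration only, not anything a screen reader actually runs):

```python
import unicodedata

flag = "\U0001F6A9"  # the "red flag" emoji, a single code point

# The official Unicode name says nothing about the color red.
print(unicodedata.name(flag))  # TRIANGULAR FLAG ON POST
print(f"U+{ord(flag):04X}")    # U+1F6A9
```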
Screen Readers Do Not Update Overnight
Screen readers are software with release cycles. They add features, fix bugs, and have to contend with browsers that change every six weeks, all at whatever pace they can muster.
Some are tied into the operating system, like VoiceOver, and historically only update with the operating system. Just because Apple is a trillion-dollar company does not mean it will move any faster or even get it right. NVDA may be able to pivot more quickly, but it is open source and its release cycle reflects its revenue stream. JAWS has a public bug tracker, and you can see it has plenty of issues sitting out there that are more important than memes.
Never mind that the half-life of a typical meme is measured in days. Some are done and gone before the screen reader engineers have been able to get their VMs fired up to do regression testing.
Screen Readers Are Not Free
As in beer. There is a cost to being able to run a screen reader, particularly the latest release.
The disabled community is historically under-employed. This means older hardware, older software, less frequent access to tech support, to updates, and so on. The latest screen reader may require the latest browser. It may require the latest hardware (when built into the operating system). It may require more time and effort to even update than its users reasonably have.
Let’s not forget the opportunity cost. First think about everyone in your family who is not comfortable with technology. Now apply that same ratio to the disabled community. Now consider that if something goes wrong in their upgrade, the tool that may be their lifeline is suddenly broken, and they cannot fix it. Now imagine the existential risk involved in upgrading to read a meme.
Screen Readers Are Not Stagnant
We have seen screen readers update to account for memes already. TalkBack used to ignore all the special characters used to mimic bold and italic text in tweets. Now it treats them as if they were regular ASCII letters.
The trade-off is that for users who had a genuine purpose to use those characters, whether for math or science, those characters are now lost to them.
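To be clear about what those “bold” tweets actually contain: they are Mathematical Alphanumeric Symbols, not styled text. As an illustration only (I have no idea what TalkBack does internally), Unicode compatibility normalization performs the same kind of folding back to plain letters, and it shows how the distinction those characters carried gets lost:

```python
import unicodedata

# "Bold" text in a tweet is really Mathematical Alphanumeric Symbols.
fancy = "\U0001D5EF\U0001D5FC\U0001D5F9\U0001D5F1"  # 𝗯𝗼𝗹𝗱

for ch in fancy:
    print(f"U+{ord(ch):05X} {unicodedata.name(ch)}")
# U+1D5EF MATHEMATICAL SANS-SERIF BOLD SMALL B
# U+1D5FC MATHEMATICAL SANS-SERIF BOLD SMALL O
# U+1D5F9 MATHEMATICAL SANS-SERIF BOLD SMALL L
# U+1D5F1 MATHEMATICAL SANS-SERIF BOLD SMALL D

# Compatibility normalization folds them to plain ASCII, discarding
# whatever the fancy characters were meant to signal.
print(unicodedata.normalize("NFKC", fancy))  # bold
```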
I have also been told (sadly, it is anecdata and I have no specific example to show) that VoiceOver will ignore runs of emoji altogether. Which could be a problem if those emoji have meaning to convey, especially if the sender and receiver previously relied on them being announced.
How Should a Screen Reader Handle…
…the red flag meme? Some people suggested rounding them all up and giving a count.
Although, Twitter could step in here too. We already know Twitter does some emoji processing on the fly, and it has a dedicated accessibility team. As the venue for the meme, it is in the best place to consider how (or if) it should concatenate those repetitive emoji.
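For what it is worth, collapsing a run of identical emoji is not hard. Here is a minimal Python sketch of one way a platform could do it; it treats each code point as one symbol and uses a crude code-point cutoff to decide what counts as an emoji, so real emoji segmentation (ZWJ sequences, skin-tone modifiers, flag pairs) would need more work. This is my illustration, not anything Twitter or any screen reader actually does.

```python
import itertools
import unicodedata

def collapse_repeats(text: str, threshold: int = 3) -> str:
    """Collapse runs of the same repeated symbol into 'name ×N'.

    Simplified sketch: one code point = one symbol, so multi-code-point
    emoji (skin tones, ZWJ sequences, flag pairs) are not handled.
    """
    out = []
    for char, group in itertools.groupby(text):
        count = len(list(group))
        if count >= threshold and ord(char) >= 0x2190:  # crude "symbol" cutoff
            name = unicodedata.name(char, "symbol").lower()
            out.append(f"{name} ×{count}")
        else:
            out.append(char * count)
    return "".join(out)

print(collapse_repeats("Blaming Screen Readers " + "\U0001F6A9" * 5))
# Blaming Screen Readers triangular flag on post ×5
```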
How do you propose it handle some of the other memes that have been popular? That rely on spacing and position? That mix words and letters with symbols and emoji?
That Text Intersecting Year Thing
Clapping Hands
Sheriff
Sign Bunny
Building Jump
Peeking
Wrap-up
The better, more immediate, solution is to be more thoughtful in how you post your content (memes). Be considerate of others, even if it takes an extra minute. Stop offloading blame. Stop making it someone else’s problem.
Screen readers happen to be the focus of this post, but everything holds true for other disabilities and other assistive technologies. Videos without captions, blinking and flashing imagery, unnecessary animations, loud noises, terrible audio, CAPTCHAs, and so on.
Techniques to make your content accessible abound. They are no more than a quick search away should you care to try. Once a user (a fellow human) has raised a problem, you would have to actively work to ignore it. Which might make you kind of a jerk.
Also, while you are thinking of other people, wear a mask and get vaccinated.
Related
- Improving Your Tweet Accessibility, a general primer that goes into more detail on the special character and emoji considerations.
- Accessible Memes Can Be Done, specifically about image memes.
- Speech Viewer Logs of Lies, on how what they show is not what they announce.
Update: 7 August 2022
Julie Moynat has put together a video demonstrating some of the issues with Unicode characters in tweets. I have embedded it below, but you can also read her post Faux gras, caractères fantaisistes, abus d’émojis : le détournement des caractères Unicode, fléau pour l’accessibilité du web (roughly, “Fake bold, fancy characters, emoji abuse: the misuse of Unicode characters, a scourge for web accessibility”) as a de facto transcript.
Update: 1 December 2022
Wendy’s went for an emoji-laden tweet with a video lacking audio description and failed to convey the tiny text saying vanilla is unavailable as long as this promotion is running.
Scarf TF snowflake graphic indeed.
Update: 24 August 2023
I use social media posts as prominent examples, but this also applies to web and software.
For example, it is not uncommon for developers to say they cannot do a thing because the technology being used is “not accessible” (or, if reading GitHub comments, “not a11y”) and then cite how it does not work in screen readers. Often this is a sign the technology itself is simply implemented incorrectly and has nothing to do with screen readers. An easy example is the Document Outline Algorithm, which was never implemented in any browser. In fact, a screen reader was the first to take a run at it and, in so doing, demonstrated the algorithm was untenable.
For the more mundane experience of content not being announced as the developer expects, it is too common for the developer to then try to apply code like a hammer to force some idealized output. The right approach, of course, is don’t override screen reader pronunciation.
And stop blaming screen readers for poor software and problematic content.
Update: 5 September 2024
I made it into the Grammar Girl podcast (well, my point about hashtags and mixed case did). I am referenced at roughly 11:53.
YouTube: Grammar Girl, episode 973.
8 Comments
I will be referencing this in the future for sure; it will help me justify the “why” of accessibility to hesitant peers.
Thanks for writing it!
Hi Adrian, thank you for writing about this and also for providing video examples! I will definitely be referencing this in future. As for why the peeking meme changed languages, the peeking figure contains the katakana character ノ (no) and so VoiceOver is switching to Japanese to account for that. I have experienced similar behavior when people use katakana in their Twitter usernames to add flourishes.
In response to Shaun. Thanks, Shaun! My lack of Japanese language skill is clearly apparent. I suspect if I had used VoiceOver for some of the other memes that use katakana characters I might have heard the same thing then.
The topics around ‘opportunity cost’ are appreciated.
I’ve noticed that the positive effect of Moore’s Law only goes so far. The price of JAWS versus ChromeVox in 2022 hasn’t made all considerations evaporate.
On the website of Eleven Ways, a Belgian accessibility consultancy, there is an update to Deque’s research about how screen readers read punctuation and typographic symbols. The research is from March 2023.
In response to Régine. Thanks, Régine! I saw that and kinda wished they had included mobile screen readers as well as Firefox.
Related, in early 2022 Steve Faulkner pressed Ctrl+Alt+↓ 5,400 times to gather all the JAWS announcements for Unicode characters. Broken up into a series of tables: Symbol text descriptions in JAWS.
A few days later, Steve pressed Ctrl+Alt+↓ 3,900 times to gather all the NVDA announcements for Unicode characters. Broken up again into a series of tables: Symbol text descriptions in NVDA.
I mention this in an accessibility 101 training I built for non profits. I’ll point to this excellent article for reference. Thank you for taking the time to record all these examples!
I will reference this every time I try to create awareness around digital communications accessibility.
Thank you so much for taking the time to demo these problems!