Are your third-party tools accessible?


The moral of this story is that automated accessibility checkers can fail, and you should always manually test your external tools, not just the product you're working on.

I recently saw a post about how an automated accessibility checker could fail to detect extreme amounts of inaccessibility. A few days after reading it, I coincidentally got asked to check the accessibility of a new survey we were running. Our surveys are something I've never really got around to prioritising for accessibility checking before, but the purpose of this one was to ask a specific group of people (not the general public) about the accessibility of the website. So it seemed important that the survey itself was accessible; otherwise people could conflate the two and our results would be less useful.

Long story short, our user researchers were so concerned with my findings that they decided to send an email to the group instead of the intended survey.

Without going into the full analysis and without naming names (we've contacted the company involved and hope for a fix shortly), I figured it is worth pointing out something I have said many times before. Until now that opinion has been fuelled by a gut instinct about how some developers behave, and by mistakes I have made myself in the past. This, however, is a very large product, and it has very clearly made the exact mistake I've so often felt is common: misuse of ARIA.

You see, HTML is pretty accessible by default. There are some exceptions, but ARIA should generally be a last resort. For example, <button> is better than <div role="button">. There are plenty of articles around the web that explain in detail why this is the case. My personal feeling is that ARIA is worth learning, since it shows you care about accessibility to some extent, but the sooner you can forget it the better. There's very rarely a reason to use ARIA that could not be handled better a different way.
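To illustrate what the ARIA route costs you compared with native HTML, here is a minimal sketch. It is my own example, not taken from the survey in question, and save() is just a placeholder handler:

<!-- The ARIA route: you have to recreate focus and keyboard activation yourself -->
<div role="button" tabindex="0"
     onclick="save()"
     onkeydown="if (event.key === 'Enter' || event.key === ' ') save()">
  Save
</div>

<!-- The native route: semantics, focus and keyboard support come for free -->
<button type="button" onclick="save()">Save</button>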

So what mistake did this survey make? There were a few, but the major one, which would have made it embarrassing to send to a group likely to be using screen readers, is that every checkbox on the survey had multiple labels. Each input had aria-labelledby as well as a linked <label>. This could be an accidental mistake, but to me it feels like an attempt at accessibility (and I applaud that) paired with a fundamental misunderstanding of how it all works. In this case <label> is perfectly fine on its own; aria-labelledby is meant for more complicated situations where a label would not work (think, for example, of polyfilling <details>/<summary>).
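I won't reproduce the survey's exact markup, but a hypothetical reconstruction of the pattern (the ids and wording are mine) looks roughly like this:

<!-- Roughly what the survey did: the checkbox is given two labels -->
<span id="q1-text">I use a screen reader</span>
<input type="checkbox" id="q1" aria-labelledby="q1-text">
<label for="q1">I use a screen reader</label>

<!-- What it needed: let the <label> do its job on its own -->
<input type="checkbox" id="q1">
<label for="q1">I use a screen reader</label>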

These multiple labels caused screen readers to read out the content of the label twice (both labels were identical). In the real world, if this happened, people would ignore it and still be able to use the form; they have to cope with enough as it is. But you can see why sending it to this particular group of people would make a really bad first impression.

I'm very glad our user researchers thought to ask me to give it a quick run through. They could easily have assumed it was fine, because they had already run the survey through a checker provided as part of the survey product, and it had said "Accessibility: WCAG Passed". So they had, very rightly, not quite trusted the automated accessibility tool to catch every mistake.

Which brings me full circle to the post I mentioned. It is really important to have tools that act as a heads-up; we can't all run full accessibility audits for every change we make. But it is just as important to know they are not infallible. It's quite surprising what issues even five minutes with a screen reader and other assistive technology can raise. Oh, and please remember focus states: how is anyone meant to navigate a survey with a keyboard when there is no focus state on the checkboxes?
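And if it helps anyone, a visible focus state does not need to be elaborate. A minimal sketch, where the selector and colour are purely illustrative rather than the survey's actual styles:

/* Show a clear outline when a checkbox receives keyboard focus */
input[type="checkbox"]:focus-visible {
  outline: 3px solid #ffbf47;
  outline-offset: 2px;
}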