When it comes to building a comprehensive web accessibility plan, many teams find themselves weighing live-user testing against automated testing. In reality, if usability, WCAG conformance, and risk mitigation are the ultimate goals, this should not be an either-or decision.
The only proven process for ensuring an equitable digital experience is pairing manual testing by people living with disabilities, such as native screen reader users, with automated testing as part of a long-term accessibility strategy.
In this blog, we will focus on the limitations of an automated-only web accessibility plan to highlight the need to lead with live-user testing.
Automated accessibility testing tools undeniably play a significant role in identifying certain accessibility issues, and they should be incorporated into a comprehensive digital accessibility strategy. However, they cannot present a complete picture of the web or mobile accessibility barriers end-users face, nor can they guarantee risk mitigation.
Automated testing tools can effectively find issues such as missing alternative text for images, empty form labels, or improper use of heading levels. However, they cannot replicate the experience of using a website or mobile app with a disability, nor identify every issue covered by the success criteria outlined in WCAG 2.1.
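To make the scope of automated scanning concrete, here is a minimal sketch, assuming the open-source axe-core engine (other engines behave similarly); the rule IDs shown are real axe-core rules, and the scan must run in a browser or jsdom environment:

```typescript
// A minimal sketch, assuming axe-core (npm install axe-core);
// it needs a live DOM, so run it in a browser or under jsdom.
import * as axe from "axe-core";

async function scanPage(): Promise<void> {
  // Scan the current document against axe-core's machine-checkable rules.
  const results = await axe.run(document);

  // Violations cover rules such as "image-alt" (images need alternate
  // text), "label" (form fields need labels), and "heading-order"
  // (heading levels should not skip).
  for (const violation of results.violations) {
    console.log(`${violation.id}: ${violation.help}`);
    for (const node of violation.nodes) {
      console.log(`  offending markup: ${node.html}`);
    }
  }
}

scanPage();
```

Note what a scan like this can and cannot see: it checks the markup it is given, not whether a real person can actually complete a task with it.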
Here are some examples of accessibility issues that automated testing tools may not detect.
Inaccessible User Experiences
Automated testing tools may not identify issues with complex design patterns that make a site hard for people with disabilities to navigate or use. For example, a dropdown menu may be difficult for someone with a motor impairment to operate, creating a complete blocker in a website's critical functions, yet go unflagged by automated testing. Or, a key navigation menu might expand only on mouse hover, an action a native screen reader user or a person with a physical disability cannot perform, which automated testing also would not identify, as the sketch below shows.
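Here is a hedged sketch of that hover-only pattern; the element IDs ("nav-toggle", "nav-menu") are hypothetical, and the toggle is assumed to be a native button element:

```typescript
// Hypothetical IDs for illustration; assume "nav-toggle" is a <button>.
const toggle = document.getElementById("nav-toggle")!;
const menu = document.getElementById("nav-menu")!;

// Inaccessible version: the menu opens only on mouse hover. The markup
// itself is valid, so an automated scan reports nothing, but keyboard
// and screen reader users can never open it:
//
//   toggle.addEventListener("mouseenter", () => { menu.hidden = false; });
//   toggle.addEventListener("mouseleave", () => { menu.hidden = true; });

// Accessible version: toggle on activation (a native <button> also
// fires "click" on Enter/Space) and expose the open/closed state via
// aria-expanded so assistive technology announces it.
menu.hidden = true;
toggle.setAttribute("aria-expanded", "false");
toggle.addEventListener("click", () => {
  const isOpen = toggle.getAttribute("aria-expanded") === "true";
  toggle.setAttribute("aria-expanded", String(!isOpen));
  menu.hidden = isOpen;
});
```

Both versions produce markup an automated scan would accept; only a person trying to open the menu without a mouse exposes the difference.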
Contextual Issues
Automated testing tools are limited because they do not consider the context in which content appears on a page. For instance, a screen reader user may misunderstand a link or sentence whose meaning is not clear from the surrounding text. Automation is useful for many things, but it cannot evaluate surrounding content the way a real user moving through the experience can. Automated testing makes checking some WCAG criteria straightforward, but manual testing is critical for identifying the many accessibility issues that are not black and white.
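A common case is ambiguous link text. The sketch below uses hypothetical markup to mimic how screen reader users often browse: by pulling every link on a page into a standalone list. Both links pass an automated "links must have discernible text" check, but only one makes sense out of context:

```typescript
// Hypothetical markup for illustration; run in a browser or jsdom.
document.body.innerHTML = `
  <p>Our pricing changed this year. <a href="/pricing">Learn more</a></p>
  <p><a href="/pricing">Learn more about this year's pricing</a></p>
`;

// Collect link texts the way a screen reader's links list presents them:
// stripped of the surrounding paragraph.
const linkTexts = Array.from(document.querySelectorAll("a")).map(
  (a) => a.textContent?.trim(),
);
console.log(linkTexts);
// -> ["Learn more", "Learn more about this year's pricing"]
// The first link reads fine inside its paragraph but is meaningless in
// a links list; no automated rule can judge that, only a person can.
```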
Cognitive Accessibility Issues
Cognitive disabilities are often overlooked, particularly when teams rely on an automated-only testing approach. This covers a broad spectrum of users, including people with intellectual disabilities, autism spectrum disorders, or ADHD, to name a few. In most cases, automated testing tools will not flag content or functions that are difficult for these users to understand or navigate. Complex language and jargon, for example, can be challenging for someone with a cognitive disability to comprehend, yet almost no automated tool would report them as accessibility issues.
Automated accessibility testing tools are a valuable part of your digital accessibility strategy, but they will not catch every type of accessibility issue. Live-user accessibility testing is recommended to surface both compliance violations and real usability problems with assistive technology such as screen readers. In this type of testing, individuals with disabilities navigate and use your website or digital property and share their feedback and insights.
For more information about live-user testing, you may want to read this article: