067: Mobile Accessibility: Building and Testing Accessible Mobile Sites & Native Apps with Gian Wild

This episode is a recording of an April 2024 WordPress Accessibility Meetup where Gian Wild, CEO of AccessibilityOz, explored the nuanced and challenging world of testing mobile sites and applications for accessibility. If you want to watch a video recording from the meetup, you may do so on the Equalize Digital website: Mobile Accessibility: Building and Testing Accessible Mobile Sites & Native Apps: Gian Wild.

Listen

Summarized Session Information

This session, led by Gian Wild, explores the nuances of mobile accessibility, focusing on the unique challenges and methodologies for building and testing accessible mobile sites and native apps. Gian provides an in-depth look at the mobile-specific issues, the latest updates from WCAG 2.2, and the practical steps involved in the mobile and native app testing methodologies. The presentation highlights the evolution of accessibility standards with WCAG 2.1 and the newly introduced WCAG 2.2, emphasizing the importance of ensuring that digital content is accessible across different devices and platforms.

Key topics include the testing of critical accessibility issues known as “traps,” mobile-specific errors, and ensuring compatibility with mobile assistive technologies. Gian also discusses the necessity of updating and refining testing processes, the removal of redundant assistive technologies, and the development of online resources to enhance the accessibility community’s tools.

This presentation is designed to provide attendees with a comprehensive understanding of mobile accessibility, offering insights into effective testing strategies and the importance of continual learning and adaptation in the field of web accessibility.

Session Outline

  • Introduction: Mobile Site Testing Methodology
  • Mobile issues
  • Testing methods
  • Mobile site & native app testing methodologies
  • Updates from WCAG 2.2
  • Additional assistive technologies/mobile features
  • Review of existing test cases
  • Removal of some assistive technologies/mobile features
  • Conclusion and resources

Introduction

AccessibilityOz is a company founded in 2011 by Gian Wild to advocate for and support individuals with disabilities. At any given time, roughly half to two-thirds of the team have some form of disability, ranging from dyslexia and vision impairments to chronic illnesses like multiple sclerosis and long COVID, reflecting the broad spectrum of “hidden” disabilities that are not immediately apparent.

There’s a common misconception that accessibility primarily concerns vision impairments and screen reader use. In reality, the largest group of web users with disabilities are people with cognitive disabilities, a category that spans conditions from epilepsy and migraines to dyslexia and other reading disorders.

Gian began her accessibility journey in 1998 with significant contributions such as working on Australia’s first accessible website, creating an automated accessibility testing tool, and engaging in global discussions on web accessibility standards, including the W3C’s WCAG 2. Her leadership extended to Monash University roles and notable involvement in multiple Commonwealth Games projects, demonstrating her long-standing commitment to enhancing web accessibility.

Since 2017, Gian has chaired the mobile site and native app accessibility testing subcommittee, recognizing the unique challenges and overlaps between mobile and native app accessibility. This initiative led to the development of merged mobile accessibility testing guidelines released in 2018, aimed at filling the gaps in the existing WCAG standards and better addressing the specific needs of mobile users.

Through these efforts, Gian and her team have worked towards proactive solutions in the evolving landscape of web accessibility, setting the stage for her detailed exploration of building and testing accessible mobile sites and native apps.

WCAG 2 vs. 2.1

WCAG is the essential framework for ensuring digital content and websites are accessible. While WCAG 2.0 addressed basic accessibility features, such as keyboard navigability, it fell short in encompassing the full spectrum of mobile accessibility, notably omitting criteria like touchscreen adaptability.

WCAG 2.1 was developed to bridge some of these gaps, introducing criteria aimed at improving touch interactions, pointer gestures, and sensor use on small-screen devices. Despite these advancements, significant gaps remain, particularly in the realm of touch-target sizes, which are critical for ensuring that interactive elements on touchscreens are usable. Unfortunately, the inclusion of touch-target size requirements in the less commonly mandated AAA level of compliance means that they are often overlooked, as most regulations and implementations focus on the AA level.

Recognizing that WCAG 2.1 did not fully address mobile needs, Gian helped re-form the committee to reevaluate mobile accessibility guidelines from scratch. This committee, comprising experts from major accessibility organizations, amalgamated their individual guidelines to eliminate redundancies and refine the standards.

The result was the release of a comprehensive mobile site and native app accessibility testing methodology in late 2019. This methodology specifically targets mobile accessibility elements not fully covered by WCAG 2.1 and clarifies expectations where WCAG 2.1 may be ambiguous or insufficient.

Despite being a collaborative effort across various organizations, these guidelines are hosted exclusively on the AccessibilityOz website due to practical issues such as frequent updates and typo corrections. This proactive approach highlights the ongoing need for specialized guidelines that cater specifically to the unique challenges posed by mobile accessibility beyond what current WCAG versions provide.

Mobile issues

Mobile accessibility features

There are distinct differences between mobile and desktop environments regarding accessibility, especially how mobile devices offer unique accessibility features that are integral for users with disabilities.

Unlike desktops, mobile devices commonly use native screen readers such as TalkBack for Android, VoiceOver for iOS, and Voice Assistant for Samsung devices. This standardization in mobile platforms contrasts with the PC environment, where users might choose from various accessibility tools like NVDA, ZoomText, or JAWS, leading to a less predictable user experience.

Additionally, mobile devices provide a suite of accessibility options that are often more prominently utilized than on desktops. These features include volume control, haptic feedback on keyboards, diverse notification settings (visual, auditory, and vibrational), screen rotation, mono audio, voice control, and the ability to increase text size or reduce motion. Mobile devices also offer specialized views like Zoom, Reader, and Simplified View that enhance readability and navigation.

Mobile users are generally more aware of and adept at utilizing built-in accessibility features compared to desktop users. This greater reliance on mobile accessibility tools means that any shortcomings in mobile accessibility can affect a larger and more dependent user base, making robust mobile accessibility support crucial for a wide range of applications.

Page variations

Under the Web Content Accessibility Guidelines (WCAG 2), websites must be navigable and usable at 200% zoom, a standard part of accessibility testing designed to aid users with visual impairments. However, a common issue arises with this requirement: when a desktop site is zoomed to 200%, it often switches to the mobile version of the site. This can lead to a significant problem if the mobile version is not a direct equivalent of the desktop version in terms of content and functionality.

This discrepancy can severely limit access for users with low vision, who rely on zoom to read and interact with web content. If the mobile version contains less content or lacks functionality present in the desktop version, these users are effectively barred from a full web experience.
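
To see why zooming tends to trigger the mobile layout, remember that browser zoom scales CSS pixels, so the viewport width reported to media queries shrinks as the zoom level increases. The sketch below is a minimal illustration of that effect; the 768px breakpoint and the window width mentioned in the comments are illustrative assumptions, not values from the talk.

```typescript
// Minimal sketch: why 200% zoom often switches a site to its "mobile" layout.
// Browser zoom scales CSS pixels, so the viewport width that media queries see
// shrinks; a 1280px-wide window at 200% zoom reports roughly 640 CSS pixels,
// which is below many common breakpoints. The 768px value is an assumption.
const cssViewportWidth = window.innerWidth; // ~640 in a 1280px window at 200% zoom
const usesNarrowLayout = window.matchMedia("(max-width: 768px)").matches;
console.log({ cssViewportWidth, usesNarrowLayout });
```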

WCAG 2.1 accessibility supported

Accessibility supported is a conformance requirement within WCAG 2.1 that emphasizes the necessity of implementation techniques which support assistive technologies commonly used on mobile devices, such as TalkBack, VoiceOver, and Switch Control. This requirement underscores the importance of practical, real-world testing with assistive technologies, especially for native apps—a practice that differs significantly from the traditional approach taken with desktop websites.

Gian advocates for a very specific methodology in this testing: those who use assistive technologies on a daily basis due to their own requirements should be the primary testers. For instance, she mentions a colleague who is legally blind and conducts 99% of the company’s screen reader testing. His daily use provides him with a level of expertise and familiarity that far exceeds what a sighted tester could achieve. His approach and insights into using these technologies are inherently different and more nuanced than those of someone who does not rely on them regularly.

Furthermore, the original intentions behind WCAG 2 aimed to be technology-neutral. The goal was to establish guidelines that would not require specific testing with assistive technologies such as JAWS or NVDA. Instead, it was hoped that by meeting WCAG 2 requirements, assistive technology manufacturers would adapt their products to work seamlessly with sites that adhered to these guidelines. However, this ideal has not translated as effectively in the mobile and native app spheres. Unlike web environments, native apps often do not allow access to their underlying code, complicating direct accessibility enhancements. Moreover, the prevalent use of assistive technologies on mobile devices compared to desktops necessitates a more hands-on approach to ensure usability.

Android assistive technologies and features

  • TalkBack
  • Keyboard
  • Keyboard and switch
  • Magnification
  • Remove animations
  • Color Inversion
  • Grayscale
  • Color Correction
  • Increase display size
  • Increase font size
  • Increase text size (with Chrome)
  • Simplified view (mobile sites only)

iPhone assistive technologies and features

  • VoiceOver
  • Keyboard
  • Keyboard and switch
  • Zoom
  • Reduce Motion
  • Invert colours
  • Grayscale
  • Larger text (native app only)
  • Reader view (mobile site only)
  • Reader view and increase text size (mobile site only)

iPad assistive technologies and features

  • VoiceOver
  • Keyboard
  • Keyboard and switch
  • Zoom
  • Reduce Motion
  • Invert colours
  • Grayscale
  • Larger text (native app only)
  • Reader view (mobile site only)
  • Reader view and increase text size (mobile site only)

Testing methods

Mobile sites

There are four main testing methods for mobile sites: devices (testing on mobile and tablet devices); devices with assistive technology (a mobile or tablet device with an assistive technology or mobile feature enabled); a responsive window (a responsively sized browser window on desktop); and desktop.

Native apps

There are only two testing methods for native apps: devices and devices with assistive technologies. The committee strongly recommends against using simulators, because simulators do not accurately represent how sites and apps behave in the real world on actual devices.

Test with real devices

Testing often occurs under idealized conditions on large monitors, which do not accurately represent the user experience on much smaller, handheld devices. This can lead to overlooked issues, such as text size being too small for comfortable reading on mobile devices. Furthermore, Gian warns against relying on simulators to replicate the functionality of assistive technologies. While some simulators claim to mimic these technologies accurately, they often fail to capture the nuanced ways in which these tools are used by people with disabilities.

Even a minimal set comprising just an iPhone and an Android phone can provide invaluable insights into the user experience. This approach ensures that testing reflects the actual conditions and challenges users will face, rather than the idealized scenarios often presented by simulators. This method allows developers and testers to identify and address potential issues more effectively, leading to better accessibility outcomes for all users.

Mobile site and native apps testing methodologies

Mobile site and native apps testing methodologies consist of five crucial steps designed to ensure comprehensive accessibility evaluations, though the specific tasks under each step can vary between mobile sites and native apps.

For mobile sites, the methodology follows this sequence:

  1. Identify devices: this initial step involves selecting the range of devices on which the mobile site will be tested, ensuring a broad and representative sample that reflects the variety of user experiences.
  2. Identify site type and variations: this step requires understanding different site versions (such as mobile and desktop versions) and how they differ in content and functionality.
  3. Test critical issues: testers focus on identifying and resolving critical accessibility issues that could severely impact the user experience.
  4. Test mobile-specific issues: checking for issues unique to mobile usage, such as touch interactions and screen orientation.
  5. Test mobile assistive technology and feature support: the final step ensures the site works well with mobile-specific assistive technologies and features, like screen readers and voice control tools.

Step 1: Identify devices

The first step of the testing methodology for mobile sites and native apps is critical: choosing the right devices for testing. The selection process should reflect the diverse range of devices the target audience uses, which can vary significantly depending on geographic and demographic factors. For instance, while iOS devices are more popular in Western countries like the United States, Australia, and the UK, Android devices dominate in many Asian and Eastern countries.

To make informed decisions about which devices to test, it’s recommended to utilize data from Google Analytics or similar analytics tools to understand which devices are most frequently used to access your site. This data can guide the prioritization of testing efforts, ensuring that the most commonly used devices are tested first. However, it’s important to note that even if one type of device dominates your analytics, it’s still crucial to test on other platforms to ensure comprehensive coverage.

For Android devices, you should test on the latest version of the Android operating system using Chrome, as this setup will most closely represent the experience across the majority of Android devices. You shouldn’t rely on the native internet browsers found on devices like Samsung phones unless significant user data supports their use, due to their dependency on specific operating systems.

Furthermore, you should test both mobile and desktop versions of a site, even for technologies that are consistent across platforms, like VoiceOver. This is because the behavior of such technologies can differ between desktop and mobile due to variations in the underlying code.

The recommendation is to use VoiceOver for iOS and TalkBack for Android for assistive technology testing. However, if there is a significant number of users on platforms like Samsung or Amazon Fire, it may be necessary to test with their specific screen readers, Voice Assistant and VoiceView, respectively.

Additionally, you should test on both iPhone and iPad due to their different operating systems, as well as on Android tablets and potentially Kindle devices to cover various user scenarios.

This step is about ensuring that the testing environment accurately reflects the range of devices and technologies used by the actual audience, thereby maximizing the effectiveness of the accessibility testing process. This thorough approach helps ensure that all potential users, regardless of device preference or geographic location, receive an accessible and user-friendly experience.

Step 2: Identify site type and variations / define application functionality

In the second step of the testing methodologies for mobile sites and native apps, distinct approaches are tailored to each platform. These approaches focus on defining the functionality that needs to be tested to ensure accessibility.

For mobile sites: it is important to identify the type and variations of the site being tested. There are three main types of sites:

  1. Desktop sites: these maintain a consistent display regardless of the device used, be it desktop, mobile, or tablet.
  2. m-Dot sites: these have a separate mobile-optimized version distinct from the main site, often with a different code base. This type requires extensive testing as both the mobile and the standard desktop versions need to be evaluated on various devices.
  3. Responsive sites: these adjust their layout and functionality based on the screen size or other criteria defined by the developer. Gian recommends that changes to the site layout should be triggered by screen size rather than device detection (browser sniffing) to facilitate easier access across different versions (see the sketch after this list).
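
As a rough illustration of the screen-size-driven approach, the sketch below uses the standard window.matchMedia API to adapt the layout to viewport width instead of sniffing the browser. The 768px breakpoint and the layout-narrow class are illustrative assumptions; the key point is that the narrow layout rearranges content without removing functionality, since a desktop user zoomed to 200% lands in the same layout.

```typescript
// Minimal sketch: drive layout changes from viewport width, not device detection,
// and keep every function available in every variation. The 768px breakpoint and
// the class name are illustrative assumptions.
const narrowLayout = window.matchMedia("(max-width: 768px)");

function applyLayout(isNarrow: boolean): void {
  // Rearrange or restyle the page for the narrow layout, but do not remove
  // functionality: a desktop user zoomed to 200% lands in this same layout.
  document.body.classList.toggle("layout-narrow", isNarrow);
}

applyLayout(narrowLayout.matches);
narrowLayout.addEventListener("change", (event) => applyLayout(event.matches));
```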

For native apps: the approach shifts significantly. Here, the focus is on defining the application’s core functionality. This involves understanding the primary purpose of the app and identifying key functionalities critical to the user’s experience. This step is crucial as native apps tend to have more specific and targeted functionalities compared to websites. For instance, a banking app will focus primarily on transactions and account management, contrasting with the broader content range on its corresponding website.

Testers should consider how the user experience would be affected if certain functionalities failed or if barriers were encountered during the interaction. This evaluative approach helps prioritize which aspects of the app are most critical and, therefore, should be tested more rigorously. Common elements to test in native apps include navigation, landing screens, login processes, settings, contact features, and real-time updates.

By meticulously defining and understanding the functionalities that need testing, developers and testers can ensure that mobile sites and native apps offer an accessible and user-friendly experience, adhering to the best practices in mobile accessibility.

Step 3: Test critical issues

In the third step of the testing methodologies for mobile sites and native apps, we must identify and address “critical issues,” also referred to as “traps.” These traps are problematic areas where a user gets stuck within a component of the website or app and cannot exit without closing the entire application or browser. These issues are more prevalent in mobile environments but are also increasingly observed on desktop sites.

Types of Traps:

  1. Exit trap: this occurs when a user encounters a full-page overlay, such as an advertisement, without an accessible, clearly marked exit option. An example is a full-page ad on Facebook with no functional back button or close option, leaving users unable to exit unless they find a small, often non-compliant, area to click.
  2. Swipe/Scroll trap: known as the “zoom of doom,” this trap happens when a page element, like a map, overtakes touch interactions meant for page navigation. Users find themselves unable to scroll past the map because all swipes only zoom or navigate the map itself. Modern solutions include requiring two fingers for map interactions to free up single-finger swipes for page scrolling.
  3. Text-to-Speech trap: in this trap, users who utilize text-to-speech functions cannot easily pause or stop the speech. For instance, an app might play an article aloud without providing an accessible pause button, forcing screen reader users to listen to overlapping audio without the ability to stop it.
  4. Headset trap: users must be able to control media playback through headset controls. A common issue arises when pressing the pause button on a headset, which pauses the screen reader instead of the media content, complicating the user’s ability to manage audio playback effectively.
  5. Layer trap: this occurs when users are unable to interact with a newly opened layer because the focus remains on the underlying page. An example is a navigation menu that appears over the page but cannot be accessed by a keyboard or screen reader, rendering it invisible and inaccessible to users who rely on these technologies.

Each of these traps represents a severe accessibility barrier that can drastically impact the user experience, particularly for those relying on assistive technologies. To ensure a fully accessible mobile site or native app, developers and testers must rigorously identify and remediate these traps during the testing phase.
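
To make the layer and exit traps more concrete, here is a minimal sketch of the kind of focus handling an overlay generally needs so keyboard and screen reader users are not stranded on the underlying page. It is not code from the methodology, and the element IDs are illustrative assumptions about the markup.

```typescript
// Minimal sketch: when a layer (menu, dialog, overlay) opens, move focus into it,
// provide a keyboard-operable way out, and restore focus on close so keyboard and
// screen reader users are never stranded on the page underneath.
// The element IDs are illustrative assumptions.
const trigger = document.querySelector<HTMLButtonElement>("#menu-button");
const overlay = document.querySelector<HTMLElement>("#menu-overlay");
const closeButton = document.querySelector<HTMLButtonElement>("#menu-close");

if (trigger && overlay && closeButton) {
  const openOverlay = (): void => {
    overlay.hidden = false;
    trigger.setAttribute("aria-expanded", "true");
    closeButton.focus(); // send focus into the layer so assistive technology follows
  };

  const closeOverlay = (): void => {
    overlay.hidden = true;
    trigger.setAttribute("aria-expanded", "false");
    trigger.focus(); // return focus to the control that opened the layer
  };

  trigger.addEventListener("click", openOverlay);
  closeButton.addEventListener("click", closeOverlay);
  overlay.addEventListener("keydown", (event) => {
    // A clearly marked, keyboard-operable exit also avoids the exit trap.
    if (event.key === "Escape") {
      closeOverlay();
    }
  });
}
```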

Step 4: Test mobile-specific issues

In the fourth step of the testing methodologies for mobile sites and native apps, we identify and resolve mobile-specific errors. This step is crucial because it addresses issues that uniquely affect mobile users, encompassing a variety of elements such as alternatives, display, actionable items, navigational aids, audio and video, forms, and interactions between mobile and desktop platforms.

Categories and examples of mobile-specific issues:

  1. Alternatives:
    • Touch gestures: each touch gesture, like swiping or double-tapping, must have an accessible alternative, such as a link or button. This ensures that actions are accessible to users who cannot perform gestures due to physical limitations or because they are using assistive technology.
  2. Display:
    • Target size: critical for touch interaction, the size of touch targets must be at least 44×44 CSS pixels so they can be accurately activated by users, including those with physical disabilities. The methodology treats this as a baseline requirement, whereas WCAG 2.1 only asks for it at the rarely mandated Level AAA, emphasizing its importance in mobile contexts (a rough audit sketch follows this list).
  3. Actionable items:
    • Color alone: Using color alone to indicate actionable items is discouraged unless it meets a contrast ratio of 3:1. This is because mobile devices are used in varied lighting conditions, and what might be clear in one setting may not be in another.
  4. Navigational aids:
    • Visual indicators: visual cues such as arrows or partial visibility of additional content are crucial to indicate that more information is available through certain actions like swiping.
  5. Audio and video:
    • Transcripts and captions: to cater to users with hearing or visual impairments, ensure that all audio and visual content is accompanied by captions and accurate transcripts that describe both spoken words and visual actions.
  6. Forms:
    • Field labels: proper placement and visibility of field labels are vital to ensure they are associated with the correct input field, especially on small screens with limited space.
  7. Mobile and desktop interaction:
    • Consistency and linking: it is important that users can navigate between mobile-specific sites (m-dot) and desktop versions seamlessly, particularly when the mobile site offers limited functionality.
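
As a rough companion to the target-size guidance above, the sketch below walks the interactive elements on a page and flags any whose rendered size falls under 44×44 CSS pixels. The selector list and the reporting are simplified assumptions; automated checks like this supplement, rather than replace, testing on real devices.

```typescript
// Minimal sketch: flag interactive elements whose rendered size is below the
// 44x44 CSS pixel guidance discussed above. The selector list and threshold
// handling are simplified assumptions, not the full methodology.
const MIN_TARGET_SIZE = 44; // CSS pixels

function findSmallTargets(root: Document = document): HTMLElement[] {
  const candidates = root.querySelectorAll<HTMLElement>(
    "a[href], button, input, select, textarea, [role='button']"
  );
  const tooSmall: HTMLElement[] = [];
  candidates.forEach((el) => {
    const rect = el.getBoundingClientRect();
    if (
      rect.width > 0 &&
      rect.height > 0 &&
      (rect.width < MIN_TARGET_SIZE || rect.height < MIN_TARGET_SIZE)
    ) {
      tooSmall.push(el);
    }
  });
  return tooSmall;
}

// Example usage: log potential issues for manual review on a real device.
findSmallTargets().forEach((el) =>
  console.warn("Touch target may be too small:", el)
);
```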

This step not only focuses on functional testing but also emphasizes the user experience from an accessibility perspective, ensuring that all interactions are intuitive and accessible on mobile devices.

Step 5: Test mobile assistive technology and feature support

In the fifth step of the testing methodologies for mobile sites and native apps, the focus is on ensuring that all content and interactive elements are fully accessible and functional with various mobile assistive technologies and accessibility features. This step is designed to verify that the application or website is not only usable but also fully accessible to users who rely on these technologies.

Key assistive technologies and features to test include:

For Android devices:

  • TalkBack: Android’s screen reader, which reads aloud on-screen content.
  • Keyboard and Switch Access: tools for users who cannot interact with a touchscreen directly.
  • Magnification: allows users to zoom into parts of the screen for better visibility.
  • Color adjustment features: includes removing animations, inverting colors, applying grayscale, and correcting color deficiencies.
  • Text size adjustments: increasing display and font size to make text easier to read.

For iOS devices (iPhone and iPad):

  • VoiceOver: iOS’s built-in screen reader.
  • Zoom and reduce motion: features to help users with visual and motion sensitivities.
  • Text and display enhancements: options like invert colors, grayscale, and larger text settings tailored specifically for native apps, with additional reading modes like Reader View available on websites.

Testing methodology and examples:

The process involves ensuring that every interactive component can be accessed and activated using these technologies. For instance, Gian cites an example where social media links on a website are only announced as “link” by VoiceOver because they lack accessible names. This lack of detailed labeling prevents VoiceOver users from understanding the purpose of each link, which significantly hampers usability.
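
As a hedged illustration of the social-links example, the sketch below adds an accessible name to icon-only links that would otherwise be announced only as “link.” The class name and data attribute are illustrative assumptions; in practice the accessible name is best provided directly in the markup rather than patched in with a script.

```typescript
// Minimal sketch: ensure icon-only links expose an accessible name so VoiceOver
// or TalkBack announces their purpose, not just "link". The selector and data
// attribute are illustrative assumptions about the markup.
document.querySelectorAll<HTMLAnchorElement>("a.social-icon").forEach((link) => {
  const hasName =
    link.hasAttribute("aria-label") || (link.textContent ?? "").trim().length > 0;
  if (!hasName) {
    const network = link.dataset.network ?? "social media";
    link.setAttribute("aria-label", `${network} profile`);
  }
});
```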

Another example provided is from an Android device using TalkBack, where activating a crossword clue should ideally lead the user directly to the corresponding input field in the crossword puzzle. However, instead of performing this expected action, the clue merely toggles the orientation from horizontal to vertical, confusing the user and failing to assist in solving the crossword.
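
A hedged sketch of the expected behavior in the crossword example might look like the following: activating a clue moves focus to the first cell of the corresponding answer instead of merely toggling orientation. The class name, data attribute, and element types are illustrative assumptions about how such a puzzle might be built.

```typescript
// Minimal sketch: when a clue is activated, send focus to the first input of the
// corresponding answer so screen reader users land where they can start typing.
// The class name and data attribute are illustrative assumptions.
document.querySelectorAll<HTMLButtonElement>("button.clue").forEach((clue) => {
  clue.addEventListener("click", () => {
    const cellId = clue.dataset.firstCell; // maps to a data-first-cell attribute
    const firstCell = cellId ? document.getElementById(cellId) : null;
    if (firstCell instanceof HTMLInputElement) {
      firstCell.focus();
    }
  });
});
```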

These examples highlight typical issues that can arise when mobile apps and websites are not thoroughly tested with assistive technologies. This testing ensures that all functionalities are operable in a way that accommodates the needs of users with disabilities, making the digital content accessible to a broader audience. By addressing these issues, developers can improve the overall quality and inclusivity of their digital offerings.

Updates from WCAG 2.2

The latest updates are included in WCAG 2.2, which was released in October 2023. These updates expand the existing guidelines with additional criteria at various compliance levels—Level A, AA, and AAA. These criteria are designed to further enhance accessibility, particularly in areas that affect a wide range of users, including those with disabilities. It’s important to note that these updates are quite new and have not yet been widely adopted by most governments.

New Success Criteria in WCAG 2.2:

  1. Level A:
    • 3.2.6 Consistent Help: this criterion requires providing consistent mechanisms for help across multiple pages of a site, ensuring that users can reliably find assistance when needed.
    • 3.3.7 Redundant Entry: this aims to reduce the need for repeated input by ensuring that information previously entered by the user is auto-populated or otherwise made available where applicable (a minimal sketch follows this list).
  2. Level AA:
    • 2.4.11 Focus Not Obscured (Minimum): ensures that the item receiving keyboard focus is not completely obscured by other elements on the page.
    • 2.5.7 Dragging Movements: requires that actions which typically require dragging have accessible alternatives for users who cannot perform these gestures.
    • 2.5.8 Target Size (Minimum): similar to previous guidance on target size, this criterion requires touch targets to be at least 24×24 CSS pixels (with some exceptions) so they can be activated easily.
    • 3.3.8 Accessible Authentication (Minimum): focuses on simplifying the authentication process to make it easier for users to access secure areas of websites or apps without complex interactions.
  3. Level AAA:
    • 2.4.12 Focus Not Obscured (Enhanced): an enhanced version of the AA criterion, ensuring that the entire keyboard focus area is visible.
    • 2.4.13 Focus Appearance: requires that the keyboard focus indicator is sufficiently large and has sufficient contrast, improving visibility for users with visual impairments.
    • 3.3.9 Accessible Authentication (Enhanced): an enhanced criterion for making authentication processes even more accessible, particularly for users with cognitive disabilities.
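
As a minimal, hedged illustration of the Redundant Entry idea referenced above, the sketch below reuses information the user has already supplied instead of asking for it again. The field IDs and the “same as billing” control are illustrative assumptions about the form markup, not part of the criterion itself.

```typescript
// Minimal sketch for Redundant Entry (WCAG 2.2, 3.3.7): auto-populate data the
// user has already provided instead of forcing re-entry. The field IDs and the
// "same as billing" control are illustrative assumptions.
const billingEmail = document.querySelector<HTMLInputElement>("#billing-email");
const confirmEmail = document.querySelector<HTMLInputElement>("#confirm-email");
const sameAsBilling = document.querySelector<HTMLInputElement>("#same-as-billing");

// Browser autofill also cuts down on repeated typing for everyone.
billingEmail?.setAttribute("autocomplete", "email");

if (sameAsBilling && billingEmail && confirmEmail) {
  sameAsBilling.addEventListener("change", () => {
    if (sameAsBilling.checked) {
      // Reuse the value the user already entered rather than asking again.
      confirmEmail.value = billingEmail.value;
    }
  });
}
```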

These updates reflect ongoing efforts to address the dynamic challenges of web accessibility as technology and user needs evolve. By incorporating these new criteria, WCAG 2.2 aims to provide a more inclusive web environment that accommodates a broader range of disabilities and interaction modes. For developers and organizations, understanding and implementing these updates is crucial for building websites and applications that are not only compliant with the latest standards but are also genuinely accessible to all users.

Additional assistive technologies/mobile features

There are also additional assistive technologies and mobile features that should be added to the methodology, such as Voice Control, increasing text size and font color from Reader View, and testing with a mouse.

Review of existing test cases

Some existing test cases need review. For example, the text-to-speech trap originally appeared only in the native app methodology but has started to occur on mobile sites as well. Likewise, the methodology already addresses consistency between mobile and desktop, but consistency between websites and their corresponding native apps also needs to be covered.

Removal of some assistive technologies/mobile features

We need to streamline the testing process by identifying redundancies in assistive technologies and mobile features that do not provide additional value in testing scenarios. This approach is aimed at making the testing process more efficient while still ensuring comprehensive coverage of functionality that affects user accessibility. Here are the specific redundancies identified:

  1. Grayscale:
    • Available on both iOS and Android, grayscale mode alters the color scheme of the display to shades of gray. Testing this feature on one platform is often indicative of its behavior on the other, suggesting a potential area to reduce duplicate testing efforts.
  2. Color Correction and Color Inversion:
    • Color correction in Android and color inversion features in both Android (Color Inversion) and iOS (Invert Colors) often exhibit similar outcomes in accessibility testing. Therefore, testing one of these features could suffice for both, reducing the amount of redundant testing.
  3. Motion Reduction Features:
    • Reduce Motion in iOS and Remove Animations in Android are designed to limit motion effects that can cause discomfort or nausea in sensitive users. Gian notes that these features generally fail or pass tests under the same conditions, suggesting that testing one would likely reflect the result of the other.
  4. Classic Invert vs. Smart Invert:
    • Classic Invert, an older feature, does not provide additional insights compared to Smart Invert. Smart Invert is designed to intelligently adjust the colors on the display for better visibility and comfort, making Classic Invert redundant for testing purposes.

By identifying these overlaps and redundancies, Gian proposes a testing strategy that is not only more efficient but also focused on yielding meaningful results. This strategy helps in prioritizing testing efforts on features that have distinct impacts on accessibility, ensuring that resources are used effectively to enhance the overall user experience on mobile devices.

Conclusion and resources

In her conclusion, Gian Wild emphasizes the ongoing need to evolve the accessibility resources available to developers and testers. She proposes the transformation of the current documentation from Word documents into a more dynamic and accessible online resource. This web-based format would offer several advantages including ease of search, the ability to update content swiftly, and improved usability and accessibility. Such enhancements would likely increase the resource’s utility and adoption among professionals in the field.

Gian shares her personal experience with the project, noting that while she initiated the conversion of these resources into an online format last year, her commitments to her primary job have slowed progress. This reality reflects a common challenge in the field—balancing paid work with the voluntary efforts needed to advance accessibility practices.

Additionally, Gian mentions efforts to reform the committee responsible for these guidelines and encourages participation from those interested in learning about mobile accessibility. She highlights that involvement in such committees is not only beneficial for the accessibility community but also serves as a profound educational experience for the members. Engaging in this committee work provides practical insights and knowledge about accessibility issues and solutions, making it an excellent opportunity for professional development in the field.

This call to action underscores the collaborative nature of accessibility work and the importance of community involvement in driving improvements and innovations in making digital content accessible to all users.

Transcript

>> CHRIS HINDS: Welcome to episode 067 of the Accessibility Craft Podcast, where we explore the art of creating accessible websites while trying out interesting craft beverages. This podcast is brought to you by the team at Equalize Digital, a WordPress accessibility company and the proud creators of the Accessibility Checker plugin.

This episode is a recording of an April 2024 WordPress Accessibility Meetup where Gian Wild, CEO of Accessibility Oz, explored the nuanced and challenging world of testing mobile sites and applications for accessibility. WordPress Accessibility Meetups take place via Zoom webinars twice a month, and anyone can attend. 

For show notes, a full transcript, and additional information about meetups, go to AccessibilityCraft.com/067.

And now, on to the show.

>> AMBER HINDS: I am very excited to introduce today’s speaker, Gian Wild. Gian, if you don’t mind turning on your camera, then I’ll be able to add a spotlight for you, but Zoom doesn’t let me do that. There we go. Gian is the CEO, founder, and president of AccessibilityOz, which was established in Australia in 2011 and the United States in 2015. Gian has worked in the accessibility industry since 1998. She worked on the first Level AAA accessible website in Australia, which is Disability Information Victoria, and developed one of the first automated accessibility testing tools, PurpleCop, in 2000.

She spent six years on the W3C Web Content Accessibility Guidelines Working Group, contributing to the development of WCAG 2, and is currently a member of the Automated WCAG Monitoring Community Group. I also had the opportunity to hear Gian give a similar talk for WordPress Accessibility Day Conference and really learned a lot. I’m very excited to have her here speaking with us today. Welcome, Gian.

>> GIAN WILD: Oh, thank you for having me. Today, I am talking about mobile accessibility. I apparently cannot share my screen. Oh yes. Now, I can. [laughs]

>> AMBER: I’m just stopping mine. As you’re getting sharing, I’ll give my final announcement, and then I will pop off here and let you take it away, and that is that we will be using the Q&A feature in Zoom. If anyone has questions, it’s easiest for us if you can please put them in the Q&A section rather than the chat because sometimes things get lost in the chat. Then I will come back on at the end and we will answer questions. I’ll let you take it away.

>> GIAN: Excellent. Thank you. Thank you, Adrienne. I am talking today about building and testing accessible mobile sites and native apps. Before we go any further, I’d like to acknowledge the Yuggera people as the traditional owners of the land that I’m currently on. I’m in the Gold Coast Hinterland. You can access this presentation from the AccessibilityOz website. If you go to the About section and then Conferences, it will be the conference linked there. The PowerPoint is linked, which is about 80 megabytes. There’s two PDFs, Part 1 and Part 2, about 6 megabytes each. If you have any trouble accessing them, please feel free to reach out to Amber, and I’ll make sure that I can send you a version that works for you.

I want to start off by just talking very briefly about accessibility, meeting our team. This was taken in 2018 BC, I like to say, “before COVID.” This is about two-thirds of our team. When I started AccessibilityOz in 2011, I wanted to support people with disabilities by providing them with employment opportunities while making the world a more accessible place. At any one time, about half to two-thirds of my staff have some kind of significant disability. Of course, you don’t know that to see, to look at us. That’s something to be really aware of when it comes to disability. It’s often what I say, “hidden in plain sight.” You don’t know whether the people you’re working with have a disability or not, so please don’t make any assumptions.

Over the years, I’ve had a whole bunch of different people with different disabilities work for me, including dyslexia, moderate vision impairment, severe vision impairments, epilepsy, migraines, physical impairments, fibromyalgia, multiple sclerosis, Crohn’s disease, post-traumatic stress disorder, autism, and long COVID. What’s also important to know is it’s not just about vision impairments. A lot of people, especially when they’re new to accessibility, think that accessibility is only about screen reader accessibility.

It definitely was a focus when accessibility started– well, when we started talking about web accessibility 20-odd years ago, but that’s not something that we should focus on now. In fact, the largest group of people with disabilities who use the web are people with cognitive disabilities. That’s because the largest group of people with disabilities are people with cognitive disabilities. I don’t mean just intellectual impairments. I mean things like epilepsy and migraines and dyslexia and other forms of reading disabilities.

A little bit about me just so you know that I do know what I’m talking about. I started, although this was kind of covered before, started in 1998. As Amber mentioned, I worked on the very first accessible website in Australia. I created Australia’s first automated accessibility testing tool, spent six years with the W3C working on WCAG 2. I worked on the Melbourne 2006 Commonwealth Games.

I’ve actually worked on every Commonwealth Games since then, including the aborted Victoria 2026 Commonwealth Games, and still waiting for an explanation about that. Then I spent five, six years managing usability and accessibility services at Monash University. I left there to found AccessibilityOz, released OZPlayer, an accessible video player in 2011, released OzART, our automated accessibility testing tool, in 2014, spoke at the United Nations on the importance of web accessibility, specifically regarding the Black Saturday bushfires in Victoria in 2015.

I was nominated for Australian of the Year in 2016 and I was inducted into the Centre of Accessibility’s Hall of Fame as Accessibility Person of the Year in 2019. One thing that is missing is from 2017, I have been chair of the mobile site and native app accessibility testing subcommittee. Now, that is a mouthful. I’d like to know a shortened version, [laughs] trying to come up with a good acronym. That’s what I’m talking about today.

Let’s start with a little background. There is this methodology. There are two methodologies that are very closely related. There’s the mobile site accessibility testing methodology and there’s native app accessibility testing methodology. There’s probably an overlap of about 85% that applies to both. Why did we develop this? What happened is that in 2017, I was at the ICT Accessibility Testing Symposium. This is a conference specifically aimed at accessibility testers. Thankfully, it is available virtually as well. If you are interested, please have a look out for that. It usually occurs in October each year. It’s based in DC. As I said, they usually have a virtual component.

At the end of this symposium, there was this discussion about– we call them a town hall, but basically a discussion about what the industry needs. In 2017, the overwhelming discussion was around mobile accessibility. WCAG 2 definitely applies to mobile, but there’s still a lot that WCAG 2 doesn’t cover. We were really thinking, “Oh okay, WCAG 2.1 is coming out at some stage. Hopefully, that will address everything, but we’re sick and tired of waiting. We’ve been waiting a while. What can we do until then?” All the big accessibility testing companies, they attend this conference.

We decided to put together a subcommittee to amalgamate everyone’s mobile accessibility testing guidelines. AccessibilityOz had mobile testing guidelines. TPG had mobile testing guidelines. Deque had mobile testing guidelines. Level Access, they’ll call it something else, had mobile accessibility testing guidelines. We just basically merged them all, got rid of the repeats, et cetera, and we released them in October 2018. It took us a year. We honestly thought that was a one-and-done situation. We thought WCAG 2.1 is going to be released. We don’t need to worry about it after that. WCAG 2.1 was released in June 2018.

Let’s talk about WCAG 2.1. Before we talk about WCAG 2.1, though, we need to talk about WCAG 2. WCAG, for those who are really new to accessibility, the Web Content Accessibility Guidelines, they are basically what you follow to make sure your website and digital content is accessible. WCAG 2 success criteria are definitely applicable to mobile. However, not all aspects of mobile accessibility are covered by WCAG 2.

For example, although WCAG 2 requires sites to be accessible to the keyboard user, it does not specify that it should also be available to the touchscreen user. That’s what we’re waiting for WCAG 2.1 to do. WCAG 2.1 definitely built on this and addressed more criteria relating to touchscreen, pointer gestures, sensors, and small-screen devices. However, it still does not cover all the user needs related to mobile accessibility. How did we come up with that conclusion? We released our guidelines just before the 2018 testing conference.

The town hall in the 2018 testing conference, the overwhelming discussion was around how WCAG 2.1 did not adequately address mobile accessibility. The really short version of that conversation is that every single mobile testing guideline that we looked at, all the ones that we’d amalgamated, agreed on a few things. One of the things that they all agreed on was touch-target size. Basically, when you’re trying to touch your screen that the actionable item is a sufficient size. There were differences in how big they should be, 15 pixels, 40 pixels, et cetera. We all agreed that there needed to be a minimum touch-target size requirement for links and fields and buttons and things like that.

WCAG 2.1 did include a touch-target size requirement. However, it was in AAA. AAA is where I say success criteria go to die. Most of the world requires AA compliance. There’s not one single country I know that requires AAA compliance. 99% of people ignore AAA requirements completely. There was that issue and a number of other issues as well that made us realize that WCAG 2.1 was not sufficient. We didn’t believe that if you followed WCAG 2.1 when building your website that it would be accessible to people with disabilities on a mobile device.

We didn’t think if you followed WCAG 2.1 building your native app that it would be accessible to people with disabilities. We reformed the committee. It’s a very long story. We started from scratch. We looked at everything, all the requirements, and all the guidelines. We looked at AAA requirements. We looked at articles. We talked to people with disabilities. At the end of 2019, we released the mobile site and native app accessibility testing methodology.

One thing to remember about the methodology is that it does not include those errors already included in WCAG 2. It doesn’t say your images need alt attributes and your headings need coded headings because that’s already in WCAG 2, but it does include those errors in WCAG 2.1. We wanted to include that just in case, say, a country had a policy of meeting WCAG 2, not 2.1.

There’s still a lot of things in 2.1 that you really need to meet if you want to make your sites accessible to mobile. It’s very clear in the methodology, by the way, when it’s a WCAG 2.1 requirement. In some cases, we disagreed with the kind of leniency of the success criterion. We make that very clear and we explain why we disagree. One of the things we disagreed on, for example, is that on a mobile device, you should not have any content that requires horizontal scrolling.

It’s a portrait device. You don’t want anything that’s basically overlapping on the side of the device, so you’d have to scroll to get to it. There are exceptions to that. The exceptions WCAG 2.1 allows for are images and videos and tables. We don’t allow those exceptions because we think from an accessibility perspective that it is something that’s actually essential for people with disabilities.

Now, I’ve told you all about them. Where can you find them? You can download the guidelines from the AccessibilityOz website. You might say, “Well, hold on. I thought this was like a bipartisan development of guidelines. Why are they sitting on the AccessibilityOz website?” They’re sitting on the AccessibilityOz website for one reason only. I keep finding typos. [chuckles] Chris Law, who runs the ICT Accessibility Testing Symposium website, got sick of updating the documents. He said, “Just put them on your website. We’ll link to you.”

You can access them if you go to the AccessibilityOz website. You go to the Resources main button and then you go to Mobile Testing. There’s an introduction note on WCAG 2.1, mobile sites versus native apps, a note on hybrid native apps, and then there’s the methodology. The mobile site and the native app testing methodology. These are expandable. If you want a more detailed overview, you can get the overview here. If you don’t, you can just download the documents.

Each guideline has five documents. The first one is the methodology. Think of the methodology as being the equivalent of WCAG 2. Then there’s the About section, which is talking about how you choose devices, assistive technologies, site types and variations for the mobile site, how you capture errors, which is, of course, very important on native app testing. Then you have three sets of documents that have basically the methodology in a lot more detail.

You’ve got critical test cases and assistive technology and mobile feature test cases. Each one of these documents is very detailed. This is a native app accessibility testing document. You can see, there are a lot of pages. Let me just get to something that’s useful. We have this test. This is one of the iOS test cases on VoiceOver. It has the methodology requirement. It has about this requirement, some information about how to test, and then it has some example passes as well. There’s a lot of information in these documents. They are Word documents. One day, hopefully, they’ll be on the website, but not yet.

Let’s talk about mobile accessibility features. It’s really important to know that mobile is very different to desktop. One of the reasons that is not just because it’s a small screen size, look at it in portrait instead of landscape, but there are native screen readers. TalkBack on Android, VoiceOver on iOS, Voice Assistant on Samsung, et cetera. Now, there is a native screen reader to Mac devices, but not PCs. People who use PCs might use NVDA. They might use ZoomText. They might use JAWS. You don’t know what people are using. With the mobile devices, you do.

There’s also things like volume control, haptic keyboard, visual, auditory, vibrational notifications, screen rotations, mono audio, voice control, increase text and display size, reduction of motion, Zoom, and reader view and simplified view on websites. Now, all of these things or most of these things are available on PC and Mac devices, but the thing is, is that most people don’t know about them, whereas people are much more aware of them on mobile devices. I’m not quite sure why that is.

As an example, my stepmother, she’s getting old. She does a lot of reading, but she can’t read normal textbooks anymore. She reads all her text, all her books, all her articles, everything, all her newspapers on her iPhone because she can increase text significantly. Now, she would never think of doing that on her computer, but it was something that she figured out by herself. It’s something that we find is that these accessibility features are much more commonly used on mobile devices than on PCs. If something breaks, you’re going to annoy a whole lot more users.

>> STEVE JONES: This episode of Accessibility Craft is sponsored by Equalize Digital Accessibility Checker, the WordPress plugin that helps you find accessibility problems before you hit publish. 

A WordPress native tool, Accessibility Checker provides reports directly on the post edit screen. Reports are comprehensive enough for an accessibility professional or developer, but easy enough for a content creator to understand. 

Accessibility Checker is an ideal tool to audit existing WordPress websites, find accessibility problems during new builds, or monitor accessibility and remind content creators of accessibility best practices on an ongoing basis. Scans run on your server, so there are no per-page fees or external API connections. GDPR and privacy compliant, real-time accessibility scanning. 

Scan unlimited posts and pages with Accessibility Checker free. Upgrade to a paid version of Accessibility Checker to scan custom post types and password-protected sites, view site-wide open issue reports, and more.

Download Accessibility Checker free today at equalizedigital.com/accessibility-checker. Use coupon code accessibilitycraft to save 10% on any paid plan.

>> GIAN: Another thing I want to talk about when it comes to mobile issues is the whole concept of page variations. As part of WCAG 2, you need to zoom to 200%, which should already be included in your regular testing. What you tend to see is that if you zoom something on your PC to 200%, you end up getting the mobile view of the website. What tends to happen is if the mobile view of the website or the mobile page variation is different to the desktop variation, then those people who have low vision are restricted basically to the mobile version. If that mobile version has different content or less content, they can’t access all the things that they want to.

Therefore, it is essential that functionality is not removed due to a variation in the page. Now, if you’re like, “I didn’t get that,” this is an example. I just want to say, this is fixed years ago, but it’s just such a perfect example. This is YouTube at 100%. You can see in the top right-hand corner, you’ve got an Upload button where you can upload videos and your notifications. Previously, as I say, has been fixed.

At 200%, those upload and notifications disappear. Now, all of a sudden, you can’t upload videos. You can’t access your notifications. Now, why would this happen? Because YouTube assumes that if you’re looking at the page at this size, you’ll be doing so on a mobile device. They don’t want you to upload your videos through their website. They want you to upload your videos through the native app. That’s why they remove it. Of course, for someone who has low vision, who has increased the text size using their browser, all of a sudden, they can’t use YouTube.

Another thing to be aware of is this concept of accessibility supported, which is a conformance requirement, which states that implementation techniques that support assistive technologies used on mobile devices, such as TalkBack, VoiceOver, and Switch Control should be supported. Basically, the short version means that you really need to test with assistive technologies, especially on native apps, in a way that you haven’t needed to do testing on desktop websites.

Now, I am a big believer that the only people that should be testing with assistive technologies are the people who require those assistive technologies. We have someone who’s legally blind and uses a screen reader and he does 99% of our screen reader testing. Now, I will do screen reader testing if it’s something that he can’t access due to location or whatever, but his testing will always be superior to my testing because he uses it on a daily basis. He has an expertise level of knowledge that I will never reach. He also uses it differently to how I would use it because I have a concept of how websites tend to operate.

The other thing about WCAG 2 that maybe not many people know is the concept of WCAG 2 was technology-neutral, which was you should be able to meet all of the WCAG 2 requirements and not have to test with assistive technologies. We didn’t, when we wrote WCAG 2, want people to test with JAWS and test with NVDA, although I’m not sure NVDA was there back then, and VoiceOver and go, “Oh, it doesn’t work in JAWS, so we have to fix this thing,” which might then break it in VoiceOver. We wanted to include all the requirements in WCAG 2 and then get those assistive technology manufacturers to make their systems work with accessible sites.

Now, this doesn’t work so well with mobile sites and native apps because, with native apps, you can’t access the code. There’s also, once again, these assistive technologies that people use a whole lot more than they use on desktops. It is something that you want to do from maybe a usability standpoint. There’s quite a few assistive technologies that we recommend that you test with.

On Android, they include TalkBack, keyboard, keyboard and switch, magnification, remove animations, color inversion, grayscale, color correction, increase display size, increase font size, and with Chrome, increase text size, and simplified view. On iPhone and also iPad, we recommend VoiceOver, keyboard, keyboard and switch, Zoom, reduce motion, invert colors, grayscale, larger text, which is native apps only, and reader view, which is on Safari only, and reader view and increase text size, which is also on Safari only.

As I said, the iPad ones are all the same. If you look at the switch, I have no idea how to test with the switch. Well, I can tell you, I had no idea how to test with the switch either. We actually got someone who was reliant on a switch to look at our guidelines and recommend the how-to-test feature. There’s lots of information in those documents on how to do these things.

Testing methods. There are basically four testing methods for mobile sites and two for native apps. The four main testing methods in mobile testing is testing on devices, which is on mobile and tablet device, devices with assistive technology, which is basically a mobile or tablet device with an assistive technology or mobile feature, a responsive window, which is basically a responsively-sized window on desktop, and desktop.

There’s only two testing methods for native apps, that is devices and devices with assistive technologies. As a committee, we strongly, strongly, can I say “strongly,” recommend that you do not use simulators. Simulators are not a good representation of sites when they’re out in the real world or on an actual device. This is one example. This is from back in 2014 and there was no Wi-Fi on airplanes. It’s a 15, 16-hour flight from Melbourne to LAX, which is a long time to be without internet.

Basically, what happens is you get to LAX and they often don’t have a gate because they don’t know the wing direction and stuff. They’re not 100% sure when the plane will land. There are times that we’re sat on the tarmac for an hour or two. The good thing is if you’re sitting on the tarmac, you can access the LAX Wi-Fi. However, this Wi-Fi, once you’re connected, says, “This page will redirect,” so content doesn’t really make sense to have here. All of a sudden, I don’t feel safe that I’ve given you my contact details, right?

Now, I have a really detailed seminar on this called, Mobile Accessibility: The Good, The Bad, The Ugly, of all the things that can go wrong when people don’t test properly. People use simulators. How did this happen? This happened because no one actually tested in the actual location. They tested. You may notice also, the text is really small. That happens quite a lot. That happens because people are testing on massive screens. They’re not testing on devices that are this big.

That’s why you must not use simulators. When it comes to assistive technologies, there are some simulators that say they can simulate the assistive technology. They can’t. Don’t believe them. There are always things that will trip them up. If there’s one thing you take from me today, it’s this: test with real devices. You only need an iPhone and an Android phone. If you’ve got a testing team of two, you’ve probably got them already.

Okay, so let’s talk about the actual methodologies. The two methodologies have the same five steps. Of course, what you do underneath them is slightly different, with the biggest exception being Step 2. For mobile sites, Step 1 is identify devices. Step 2 is identify site type and variations. Step 3 is test critical issues. Step 4 is test mobile-specific issues. Step 5 is test mobile assistive technology and feature support. The native app testing methodology overview is exactly the same, except Step 2 is define application functionality.

Let’s talk about Step 1. The first thing you need to do is to identify the devices that you should test on. Short version is Android phone, iPhone, but there are some things worth thinking about. In the United States, Australia, UK, other Western countries, iOS devices are most popular. However, in Asia and other Eastern countries, Android devices are most popular. It’s best to have a look at your Google Analytics or any other analytics system to see what is popular on your site. I’m not saying that means you can get away with not testing the other mobile device, but it might be you test first on Android and then you test on iPhone if Android is most popular.

The other thing to be aware of is that, due to the popularity of the Android system, there are tens of thousands of Android operating system and browser combinations available. It’s not possible to test on all of these systems. You really just need to test on an Android phone running the latest version of Android. If it’s a website, test with Chrome; Chrome is the best representation of what you’ll see across the majority of Android devices. Do not test with the internet browser that comes pre-packaged with Samsung phones, as it is very dependent on the Samsung operating system. The exception is if the majority of your users, or at least quite a lot of them, use a Samsung phone with that browser; then you might want to test with it as well.

The other thing to remember is that even if the site is a desktop site and you’re thinking, “This is a desktop site. No one will ever use it on a mobile phone,” people will still use it on a mobile phone. Please be aware of that. Then, talking about devices with assistive technologies, it’s important to remember that if you have an assistive technology that works on desktop and on mobile, say VoiceOver, it’s going to behave differently on desktop versus mobile because the code is different.

You need to test on mobile and desktop even if it’s the same assistive technology. You can have a look at the WebAIM screen reader survey for the latest screen reader usage, but my recommendation is basically VoiceOver for iOS and TalkBack for Android. Samsung does include an additional screen reader, called Voice Assistant, because Samsung likes to be different. However, TalkBack is still available as part of the accessibility suite, and most people who are expert screen reader users are probably more likely to use TalkBack.

However, once again, if you have a lot of Samsung users, then you might want to consider testing with Voice Assistant. Amazon Fire also has a different screen reader called VoiceView. If you’ve got content that’s going to be seen on Amazon Fire, you need to test with that. Basically, you can, most of the time, test on your iPhone with Safari if it’s a site, iPad with Safari, and Android phone with Chrome.

Now, unfortunately, iPad and iPhone actually have different operating systems, so you do have to test on both of them because your content will display differently. You also want to consider testing with an Android tablet with Chrome or alternative devices such as a Kindle device. In terms of operating systems, test on the latest version of iOS and iPadOS. The guidelines recommend the latest two versions of Android.

However, I have found that testing on the latest is sufficient. Also, when a site is aimed directly at people with a particular disability, consider using assistive devices and/or other assistive technologies used by potential users. For example, if it’s a site aimed at people with acquired brain injury, then consider doing some user testing with people who use Dragon NaturallySpeaking. And just in case you forgot, you need to meet WCAG 2 as well as this methodology.

For the mobile site requirements, Step 2 is identify site type and variations of the page. What do I mean by site type and variations? You need to determine if the site is a desktop site, an m-dot site, or a responsive site. If the site is responsive, are there variations of the page? Desktop sites are sites that have only one display, whether viewed on a desktop, mobile, or tablet device. m-dot sites have a particular display for mobile and tablet devices.

The m-dot site is a completely different code base from the main website. That means both the m-dot site and the www site need to be tested on desktop, and both the m-dot site and the www site need to be tested on mobile. It’s twice as much testing. Don’t use an m-dot site. Then there are responsive sites, which change depending on the screen size or other features as determined by the developer.

Now, most websites are responsive websites. We also recommend that you only change the site depending on the screen size, not by browser sniffing or anything like that. That way, people can access different versions fairly easily. That’s what is recommended. Now, for native apps, Step 2 is slightly different. Well, significantly different. It’s define application functionality. Through your understanding of the purpose of the native app, define which functionality is critical to its purpose and use, and which must be tested for efficacy, operability, and workflow from a user experience perspective.

Now, what does this mean? Basically, if you have the Wells Fargo website and the Wells Fargo banking app, they are completely different things. The Wells Fargo website will have information on bank locations, information on the share price, information on how to apply for a job, information on how to make a complaint, information on home loan interest rates, et cetera, whereas the Wells Fargo banking app is very specific in its functionality, specifically about mobile banking. That’s what we find with native apps is it’s a much more specific set of content.

Ask the question, how would the experience be impacted if the functionality failed, the content could not be reached, and/or the experience caused a barrier to the user? Then prioritize. All functionality should be accessible within the native app. However, it is important to define and include the critical functionality for each individual app to be prioritized in your testing.

Then there will be common elements to test: navigation, landing screens, emergency sections, login flows, settings, account and profile, Contact Us, real-time updates like emergencies, privacy policy, terms and conditions, interactional functionality, help sections, widgets, calendars, et cetera, geolocational maps, and high-traffic areas. Just in case you forgot, you can access this on the AccessibilityOz website under the Resources menu.

Step 3 is test critical issues. Critical issues are basically what we call traps. A trap is where a user is trapped within a component and cannot escape without closing the browser or the app. There are many more traps in mobile sites and native apps than on desktop. Although some of the ones that we’re finding on mobile sites, we’re also now finding on desktop.

What do I mean by a trap? On desktop, back in the late 2000s, Firefox had this problem that if you had a video player and you tabbed into the video player, you couldn’t tab out using the keyboard. You couldn’t escape from the video player using the keyboard. The only way for someone who is reliant on a keyboard to get out of that would be to actually close the browser and start again. Now, that is called a trap and it’s one of the reasons why we built OZPlayer.

We identified five different traps that occur on mobile and tablet devices: the exit trap, the swipe/scroll trap, the text-to-speech trap, the headset trap, and the layer trap. For the exit trap, the requirement is to ensure there is always an accessible actionable item, e.g., a Close button that meets color contrast requirements and has an accessible name, that closes any feature that overlays the current page, such as a full-page ad. This applies to all users and it’s in both methodologies.

This is an example of an exit trap. You’re on Facebook and this full-page ad pops up. Now, you can see at the top, there’s a little Back button. That doesn’t do anything. There’s a URL bar, can’t access it. The only way to get out of this is to hit this very small area and then the ad will disappear. You can see, there’s no Close button for the ad, et cetera. That is an exit trap.

This is something I’m sure we’ve seen many times before. It’s a dark UX pattern, but it’s also an exit trap. Basically, you’ve got a pop-up that says, “Hey, buy the full sale for The Chronicle.” The dialog box has a Close button, which is great. However, it doesn’t meet color contrast or touch-target size requirements, because it’s too small and it’s gray text on a white background. Now, you might say, “Oh, you can just tap outside the dialog box.” Well, some people can’t. They will get to this page and they can’t go any further. That is an exit trap.
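To make that concrete on the web side, here is a minimal sketch of an overlay that avoids becoming an exit trap. This assumes a plain DOM implementation; the role, label, and sizes are illustrative rather than taken from the methodology.

```typescript
// Sketch of an overlay that cannot become an exit trap (illustrative, plain DOM).
// The close control is a real <button> with a visible, accessible name, sized to
// meet the 44 x 44 CSS pixel touch target, and Escape also dismisses the overlay.
function showOverlay(content: HTMLElement): void {
  const overlay = document.createElement("div");
  overlay.setAttribute("role", "dialog");
  overlay.setAttribute("aria-modal", "true");
  overlay.setAttribute("aria-label", "Advertisement");

  const close = document.createElement("button");
  close.type = "button";
  close.textContent = "Close"; // visible text doubles as the accessible name
  close.style.minWidth = "44px";
  close.style.minHeight = "44px";
  close.addEventListener("click", () => overlay.remove());

  overlay.addEventListener("keydown", (event) => {
    if (event.key === "Escape") overlay.remove(); // keyboard escape route
  });

  overlay.append(close, content);
  document.body.append(overlay);
  close.focus(); // move focus to the close control so keyboard users are not stranded
}
```

The point is simply that every way of closing the layer is itself accessible: a visible, labelled, large-enough button plus a keyboard escape, not a tiny hit area or a tap outside the box.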

Then we have the swipe/scroll trap. Ensure you do not override standard mobile touch functions, swiping, scrolling, et cetera, on the majority of the page. This applies to touch users and it’s in both methodologies. This is what I call “the zoom of doom.” This is a map that’s on the page and takes up almost all of the screen. If you scroll anywhere on that map, it scrolls the map; it doesn’t scroll the page up and down. You have to hit these very small areas of white to be able to scroll the page, which is very difficult. Now, we’re seeing a lot of maps saying, “Use two fingers to scroll,” and things like that. People are finding ways to address this.
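For embedded maps in particular, one common way to get that “use two fingers to scroll” behaviour on the web is the Google Maps JavaScript API’s documented gestureHandling option. A minimal sketch, assuming that API is already loaded; the container id and coordinates are placeholders:

```typescript
// Sketch: stop an embedded Google Map from swallowing one-finger page scrolling.
// Assumes the Google Maps JavaScript API is loaded; "store-map" is a placeholder id.
declare const google: any;

const mapContainer = document.getElementById("store-map") as HTMLElement;

const map = new google.maps.Map(mapContainer, {
  center: { lat: -37.8136, lng: 144.9631 }, // placeholder coordinates (Melbourne)
  zoom: 12,
  // "cooperative": one finger scrolls the page, two fingers pan the map,
  // so the map no longer hijacks the standard scroll gesture.
  gestureHandling: "cooperative",
});
```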

Then there’s the text-to-speech trap. If the app has the ability to provide content via text-to-speech, the screen reader user must be able to pause or stop the app speaking in a simple manner, e.g., by performing a swipe on the screen. This applies to screen reader users and it’s in the native app methodology. However, we have started to find it on mobile sites as well. This is Pocket. I think they’ve fixed this, actually.

Basically, Pocket saves all these articles that you want to read one day. My Pocket has thousands of articles that I’ll probably never look at. It has the ability to play an article aloud. If you hit play, there’s no easy way to stop the text-to-speech. In order for a screen reader user to stop the text-to-speech, they have to navigate through the page to find a Pause button. Now, that is impossible for them to do, because while they are listening to the article speaking, they can’t hear their screen reader audio. That is a text-to-speech trap.
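On a website, one way to avoid this trap, assuming the read-aloud feature is built on the browser’s Web Speech API (an assumption; native apps would use their platform’s own speech APIs), is to make the stop control the same control that started playback and keep it at the top of the reading order:

```typescript
// Sketch: a read-aloud toggle that a screen reader user can stop with one activation.
// Assumes the Web Speech API (speechSynthesis); element ids are placeholders, and the
// toggle button should sit before the article content in the reading order.
const article = document.getElementById("article-body") as HTMLElement;
const toggle = document.getElementById("read-aloud-toggle") as HTMLButtonElement;

let speaking = false;

toggle.addEventListener("click", () => {
  if (speaking) {
    speechSynthesis.cancel(); // a single activation stops the speech immediately
    toggle.textContent = "Listen to this article";
  } else {
    speechSynthesis.speak(new SpeechSynthesisUtterance(article.textContent ?? ""));
    toggle.textContent = "Stop reading aloud";
  }
  speaking = !speaking;
});
```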

Then we have the headset trap. Headset users must always be able to pause media, audio, or video content by using the pause/play control on the headset. This applies to screen reader users and headset users, and it’s in both methodologies. This is an example where you’ve got a little pop-up video at the bottom of a website. If you want to pause it, you can tap on the little Mute button here. Of course, if it’s playing through your headset and perhaps you’re using a screen reader as well, pressing the Pause button on your headset pauses the screen reader. It does not pause the audio or the video.
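On the web, hardware play/pause buttons are surfaced to the page through the Media Session API, so a custom player can make sure the headset controls act on the media itself rather than being left to pause the screen reader. A minimal sketch, assuming a standard video element wrapped by a custom player:

```typescript
// Sketch: wire the headset's play/pause buttons to the page's media.
// Plain <audio>/<video> elements usually get this for free; custom players can
// use the Media Session API to handle the hardware actions explicitly.
const video = document.querySelector("video") as HTMLVideoElement;

if ("mediaSession" in navigator) {
  navigator.mediaSession.setActionHandler("pause", () => {
    video.pause(); // the headset's pause button pauses the video, not the screen reader
    navigator.mediaSession.playbackState = "paused";
  });
  navigator.mediaSession.setActionHandler("play", () => {
    void video.play();
    navigator.mediaSession.playbackState = "playing";
  });
}
```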

Then there’s the layer trap. The user should not be trapped on a non-visible layer. This applies to all users, but it’s mostly encountered by screen reader users and sometimes keyboard users, and it’s in both methodologies. This is an example where you’ve got a menu that pops out when you select the Menu button. Even though the menu appears, the keyboard and the screen reader are stuck on the underlying page. That means the screen reader user won’t get access to any of the menu content, and the keyboard user can’t see what they’re doing and can’t access any of the menu content either.
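A minimal sketch of avoiding the layer trap on the web when a menu opens, assuming a plain DOM menu button and current browser support for the inert attribute; the element ids are placeholders:

```typescript
// Sketch: keep keyboard and screen reader users on the visible menu layer.
// Marking the underlying page "inert" removes it from the tab order and from
// the screen reader's reading order while the menu is open.
const menuButton = document.getElementById("menu-button") as HTMLButtonElement;
const menu = document.getElementById("site-menu") as HTMLElement;
const pageContent = document.getElementById("page-content") as HTMLElement;

menuButton.addEventListener("click", () => {
  const opening = menu.hasAttribute("hidden");
  if (opening) {
    menu.removeAttribute("hidden");
    pageContent.setAttribute("inert", "");
    menuButton.setAttribute("aria-expanded", "true");
    (menu.querySelector("a, button") as HTMLElement | null)?.focus(); // move focus into the layer
  } else {
    menu.setAttribute("hidden", "");
    pageContent.removeAttribute("inert");
    menuButton.setAttribute("aria-expanded", "false");
    menuButton.focus(); // return focus to the trigger
  }
});
```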

Step 4 is test mobile-specific errors. There are a bunch of categories in Step 4: alternatives, display, actionable items, navigational aids, audio and video, forms, and mobile and desktop interaction, which is only applicable to mobile sites. In alternatives, we have nine requirements:

– 2.1: Motion, interaction, and gesture.

– 2.2: Touch gestures.

– 2.3: Geolocation.

– 2.4: Change of state.

– 2.5: Audio cues.

– 2.6: Status messages.

– 2.7: Abbreviations.

– 2.8: Summary of content.

– 2.9: Ambiguous text.

Let’s see a full example from the document.

Touch gestures, the requirement is any touch gesture must have an alternative, accessible, actionable item. This is very similar to the success criterion in WCAG 2.1 called “2.5.1: pointer gestures.” What’s a touch gesture? Touch gesture is swiping up and down or left and right, dragging up and down or left and right, double-tapping, tap and hold, tap and swipe, two-pinch zoom, and press and long hold.

What counts as an alternative, accessible actionable item? A link, a button, a dropdown, or a separate page with the same functionality. Then every requirement has an About This Requirement section, which explains why we think it’s important. This requirement is particularly important for screen reader users. For example, if you require your user to swipe right to complete a purchase, when the screen reader is on, the swipe-right gesture moves you to the next focusable item and doesn’t complete the purchase.

You must be able to perform the same action by using a link, an up-or-down swipe, or some other gesture. Please note that this requirement is similar to the exit-trap requirement. A failure of the exit-trap requirement is that a user cannot escape from content or a page. A failure of the touch-gestures requirement is that the user cannot reach content or a page, i.e., they are not trapped, they just can’t get to it.
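As a concrete illustration of an alternative actionable item, here is a minimal sketch of a swipeable row of stories that can also be moved with ordinary Previous and Next buttons, so the gesture is never the only way in; the element ids are placeholders, not from any particular site:

```typescript
// Sketch: a horizontally swipeable row with button alternatives to the gesture.
// The container scrolls natively by touch; the buttons expose the same content
// to people who cannot perform the swipe.
const row = document.getElementById("top-stories") as HTMLElement;
const prev = document.getElementById("stories-prev") as HTMLButtonElement;
const next = document.getElementById("stories-next") as HTMLButtonElement;

function scrollRow(direction: 1 | -1): void {
  row.scrollBy({ left: direction * row.clientWidth, behavior: "smooth" });
}

prev.addEventListener("click", () => scrollRow(-1));
next.addEventListener("click", () => scrollRow(1));
```

A “See More” link to the same content in a linear list, as in the pass example below, achieves the same thing.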

Then there’s a How to Test section. Identify any site controls. If they require any of the following gestures, is there an accessible actionable item provided as an alternative? Then it lists the gestures. Swiping up and down or left and right or dragging up and down or left and right, double-tapping or two-pinch zoom, tap and hold or tap and swipe, and press and long hold.

This is an example of a pass. You’ve got two rows of content under the Top Stories heading. You can see that there is more content available on swipe because part of the content is cut off. This is a very common visual indicator of additional content, and it’s actually another requirement of the methodology. However, there is also a link at the bottom called “See More.” The same content that appears on swipe is shown, in a linear order, when this link is activated. That way, people can access all the content even if they can’t swipe from right to left.

This is another pass example. When viewing the weather on Google, you can select and drag the slider to determine the weather at certain times during the day. However, you can also tap on the times to move the slider to that specific time. That’s another pass example.

Then you have the display category, which has:

– 3.1: Three flashes.

– 3.2: Change on request.

– 3.3: Target size.

– 3.4: Inactive space.

– 3.5: Fixed size containers.

– 3.6: Justified text.

– 3.7: Color contrast.

– 3.8: Orientation.

– 3.9: Animation.

Let’s talk about target size. The requirement in the methodology is that the size of touch targets is at least 44-by-44 CSS pixels, which is approximately 7 to 10 millimeters. This is similar to the WCAG 2.1 AAA success criterion 2.5.5 target size. We say in the methodology, please note that this differs from WCAG 2.1 as success criterion 2.5.5 is a Level AAA requirement. In this methodology, it is a mandatory requirement.

Now, why do we have this requirement? Most people use touch as the form of interaction on mobile and tablet devices. Touch is not as granular as mouse interaction and can depend on the size of a person’s fingers. People with certain physical disabilities may also find it difficult to activate very small touch targets. There are a couple of exclusions. There’s a fairly simple way to test: you identify all actionable touch targets, then you turn on the screen reader and swipe to each actionable touch target. The screen reader will outline the actionable item.

Therefore, you can measure it with a ruler or something like that against your screen to see if it meets the size requirements. This is an example. When visiting the Airbnb website, and this was a long time ago, a pop-up appears at the top asking if you’d like to install the Airbnb native app. The only way to close this pop-up is to activate a very small close item in the top left-hand corner. The close item is so small that it doesn’t meet touch target requirements, and it’s very easy to aim for the Close button and find that you’ve actually activated the link to the native app instead. This is not an exit trap, as the pop-up doesn’t overlay the whole page.

3.4, inactive space, which is very similar. Actionable items have sufficient inactive space between them. Inactive space of at least 10 pixels should be provided around active elements. Once again, as I said, most people use touch. People with certain physical disabilities may find it difficult to activate the correct item if multiple options are available without a minimum of inactive space. Once again, the How to Test is very similar. You turn on the screen reader, you determine where all the actionable items are, and you measure the space between them.
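For mobile websites, those manual measurements can be complemented with a rough automated sweep. This is only a sketch to run in the browser console on the page under test; the selector list is an assumption about where the actionable items live, and the thresholds follow the methodology’s 44-by-44 CSS pixels and 10 pixels of inactive space.

```typescript
// Rough audit sketch: flag small touch targets and missing inactive space.
// Heuristic only; it does not replace measuring on a real device.
const MIN_SIZE = 44; // minimum touch target size in CSS pixels
const MIN_GAP = 10;  // minimum inactive space in CSS pixels

const targets = Array.from(
  document.querySelectorAll<HTMLElement>("a, button, input, [role='button']")
).filter((el) => el.offsetParent !== null); // skip most hidden elements

const rects = targets.map((el) => ({ el, rect: el.getBoundingClientRect() }));

for (const { el, rect } of rects) {
  if (rect.width < MIN_SIZE || rect.height < MIN_SIZE) {
    console.warn("Touch target smaller than 44x44:", el, rect.width, rect.height);
  }
  for (const other of rects) {
    if (other.el === el) continue;
    // Rough edge-to-edge distance: positive on the axis where the boxes do not overlap.
    const gapX = Math.max(other.rect.left - rect.right, rect.left - other.rect.right);
    const gapY = Math.max(other.rect.top - rect.bottom, rect.top - other.rect.bottom);
    const gap = Math.max(gapX, gapY);
    if (gap >= 0 && gap < MIN_GAP) {
      console.warn("Less than 10px of inactive space between:", el, other.el);
    }
  }
}
```

Anything it flags still needs to be confirmed on a real device, since rendered size can differ from CSS pixels.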

This is a failure. This is the Asana native app. In the top right-hand corner, you’ve got the Edit button and the Mark Complete button. It’s very easy, and I’ve done it many times, to accidentally mark something complete when you meant to edit it. Of course, it doesn’t meet the error prevention requirements of WCAG 2 either. Once you’ve deleted something, it’s gone.

This is another example, from Wikipedia. There’s no inactive space between the related articles, and that can be problematic. This is a pass: you can see there’s enough space between all the actionable items.

Then we have the actionable items category:

– 4.1: Content on hover, focus, or input.

– 4.2: Native UI.

– 4.3: Descriptive text links.

– 4.4: Non-keyboard options.

– 4.5: Infinite scrolling.

– 4.6: Color alone.

– 4.7: Removal of touch.

4.6 is color alone. Color alone should not be used to indicate actionable items; a secondary method such as underline or bold should be used in addition to color. This requirement is aimed at visual users only. It relies heavily on the WCAG 2.1 success criterion 1.4.1, use of color. That success criterion allows exceptions for actionable items that differ from the surrounding text in color alone, provided the difference meets a color contrast ratio of 3:1. So WCAG 2.1 does allow you to use color alone as long as the color contrast is 3:1. Now, the color contrast requirement for everything else is 4.5:1. We decided that wasn’t sufficient.

Mobile devices are, by nature, mobile and used in a variety of environments, including full sun and full darkness. This means that color differences that may be obvious to all users in an office environment or on your PC could be very unclear in other locations. In addition to this, actionable items on desktop provide feedback when the user mouses over them: at a minimum, the cursor changes, and sometimes the address appears in the bottom left-hand corner. These are all additional indicators that something is actionable, and they’re available on desktop but not on mobile. That’s why we decided we should include this requirement.

We did include an exclusion, which is basically links that are not inline text, so your menu items, navigation bars, button text, and things like that. They still need to meet color contrast requirements, but they have enough visual information that color alone can be used. As I said, this is aimed at visual users; it’s not aimed at people who use screen readers. This is an example where links are just blue and the surrounding text is black. This would be a failure. And this is a pass: you can see the links are underlined, so it’s fairly straightforward.
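A rough way to spot likely failures of this requirement on a website is to look at the computed style of links inside running text. This console sketch is only a heuristic: it checks links in paragraphs only, mirroring the exclusion for menus and buttons, and treats underline or bold as the secondary cue.

```typescript
// Heuristic sketch: warn about inline text links that may rely on color alone.
// Only looks at links inside paragraphs; menu items, nav bars, and buttons are excluded.
const inlineLinks = document.querySelectorAll<HTMLAnchorElement>("p a[href]");

inlineLinks.forEach((link) => {
  const style = getComputedStyle(link);
  const underlined = style.textDecorationLine.includes("underline");
  const bold = parseInt(style.fontWeight, 10) >= 700;
  if (!underlined && !bold) {
    console.warn("Inline link may rely on color alone:", link.href, link.textContent);
  }
});
```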

The next category is navigational aids:

– 5.1: Visual indicators.

– 5.2: Character key shortcuts.

– 5.3: Descriptive headings.

– 5.4: Inactivity timeout.

– 5.5: Navigation features.

I’ll just show you 5.1, visual indicators. Visual indicators, such as arrows or Next and Previous buttons, have been used to indicate swipe or scroll areas or additional functionality. This is very similar to WCAG 2.1 success criterion 2.5.1, pointer gestures.

Basically, as we talked about before, when using a mobile device there’s less feedback provided to the user as to the functionality of actionable items and things like that. As a result, it’s really important to provide a visual indicator of the functionality. This is an example from the BBC app. If you swipe from right to left, it actually moves to the next tab. You’re currently on Top Stories; you swipe from right to left and you get to My News. There’s no indicator that a swipe has that functionality. I found it by accident.

This is another fail. You’ve got your days of the week at the top, and to access the next week you swipe, once again, from right to left, or left to right if you want to go to previous weeks. This is, once again, something that doesn’t have a visual indicator. And this is something that does: you can see that the images are cut off on the right-hand side of the mobile screen. This is a very common way to indicate visually that if you swipe from right to left, there is additional content.

Then we have audio and video. There are three requirements:

– 6.1: Transcript.

– 6.2: Captions.

– 6.3: Live audio and video.

The transcript requirement is that all video and audio have an accessible transcript. How does this differ from WCAG 2, you may ask? WCAG 2.1, at Level A, requires that you have audio descriptions or a transcript, and captions. At AA, it requires you to have audio descriptions. That means some sites can comply with WCAG 2.1 Level AA by providing captions and audio descriptions for their content and not a transcript, and the site would still be compliant with the WCAG 2 video and audio requirements.

Captions, of course, are essential to people who are deaf or hard of hearing. Audio descriptions are essential for people with visual impairments. However, there are groups of people who have difficulty with both, and those are people who are deafblind, so having a transcript at that point is essential. The transcript must include both the audio and the visual content of the video. It’s not sufficient to have a transcript of only speech.

This is a fail. This is Nancy Drew, The Curse of the Dark Storm, in the CW app. There’s a little description in tiny, tiny, tiny text, but there’s no transcript for the video or audio.

This is another failure. This one does have a transcript, but the transcript is only of the speech. There’s no transcript of what you see on the screen.

This is an example of a pass. I dislocated my toes a few years back and had to do all these rehab exercises. This is a video that shows the exercise and is accompanied by a text transcript that accurately describes the entire video. The text is, “Sit in a chair with your feet flat on the floor, take a deep breath in and lift your toes up, hold the position, then relax.” You could actually complete that exercise without seeing the video.

The next category is Forms. There are seven requirements:

– 7.1: CAPTCHAs – basically, we say, “Don’t use CAPTCHAs.”

– 7.2: Context-sensitive help.

– 7.3: Error prevention.

– 7.4: Positioned field labels.

– 7.5: Visible field labels.

– 7.6: Accessible name.

– 7.7: Form and keyboard interaction.

7.4 is positioned field labels. Field labels must be positioned adjacent to their input field and must appear closer to their respective input fields than to other field labels and other input fields. The mobile device has a small screen, so if field labels are positioned far away from their respective input fields, it’s possible they won’t appear on the screen at the same time as the field they describe. In addition, where a field label is positioned closer to another input field than to its own, it can be unclear to some users which field the label describes.

This is an example of a site where you’ve got a Yes radio button and a No radio button. The radio button that is selected is actually the Yes radio button, but it sits closer to the No radio button. This is made much worse because the No radio button, which is unselected, doesn’t meet color contrast requirements, so you can’t really see it either. This is a pass, where you’ve got your field labels right next to their fields. That’s much clearer.
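A rough console sketch of that adjacency check, assuming labels are programmatically associated with their fields via for/id or wrapping; the 24-pixel threshold is an illustrative assumption, not a number from the methodology:

```typescript
// Rough sketch: flag field labels that sit far away from their own inputs.
// Assumes programmatic association; visual review is still needed for the
// "closer to someone else's field" part of the requirement.
const MAX_LABEL_GAP = 24; // illustrative threshold in CSS pixels

document
  .querySelectorAll<HTMLInputElement | HTMLSelectElement | HTMLTextAreaElement>(
    "input, select, textarea"
  )
  .forEach((field) => {
    const label = field.labels?.[0];
    if (!label) {
      console.warn("Field has no associated label:", field);
      return;
    }
    const f = field.getBoundingClientRect();
    const l = label.getBoundingClientRect();
    const gap = Math.max(
      l.left - f.right, // label to the right of the field
      f.left - l.right, // label to the left of the field
      l.top - f.bottom, // label below the field
      f.top - l.bottom  // label above the field
    );
    if (gap > MAX_LABEL_GAP) {
      console.warn("Label appears far from its field:", label.textContent, field);
    }
  });
```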

Lastly, we have mobile and desktop interaction. There are two requirements: 8.1: Consistency, and 8.2: Linking between types of the site. Links between the different types of the site (desktop, m-dot, and/or responsive) must be provided where the site is not solely a responsive site. Basically, WCAG 2 clearly states that all functionality must be available on all the different variations of a responsive site. However, in some cases, an m-dot site is created specifically to provide limited functionality, for example, a map of a campus or something like that. In these cases, it’s adequate to provide limited functionality on the m-dot site, as long as the user can easily move to the desktop or responsive site to access all content and functionality.

Now, there’s also another reason, which is that sometimes your desktop site is easier to use, or your mobile site’s easier to use, or someone’s concerned that they’re missing content, which they shouldn’t be, but they might be concerned so they might want to check out the desktop site. Maybe the desktop site is easier for them to use. This is a fail. You can see here, this is the Metacritic website. You’ve got a “see full site” link at the bottom. You can link from the mobile site to the desktop site, but you can’t link back again.

This is also a fail– Sorry, this is a pass. I’ve got to fix that slide. Sorry about that. Basically, this is the IMDb website, and it has a “view full site” link that takes you to the full site, which is the desktop site. You can also then go back to the IMDb mobile site. Unfortunately, this feature has since been removed.

Then Step 5 is test mobile assistive technology and feature support. The requirements all basically read, “All actionable items and content can be accessed and activated by the following assistive technology,” or, “…when the following feature is enabled.” Then we have all those assistive technologies again. On Android, that’s TalkBack, Keyboard, Keyboard and switch, Magnification, Remove animations, Color inversion, Grayscale, Color correction, Increase display size, and Increase font size. Then, with Chrome, it’s Increase text size and Simplified view.

Then, with iPhone and iPad, it’s VoiceOver, Keyboard, Keyboard and switch, Zoom, Reduce Motion, Invert Colors, Grayscale, Larger text (native apps only), and Reader view and Reader view with increased text size (sites only). The iPad ones are the same.

I’m just going to show you an example of VoiceOver. Basically, the requirement is, “All actionable items and content can be accessed and activated by VoiceOver on iOS.” This is an example here where you’ve got your social media links, Facebook, Twitter, YouTube, and Pinterest; VoiceOver just reads it as link, link, link, link, because they do not have accessible names.
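A quick way to catch that “link, link, link” problem before reaching for a device is to approximate the accessible name of each link and button in the browser console. This sketch only checks the most common name sources and is no substitute for listening with VoiceOver or TalkBack:

```typescript
// Sketch: flag links and buttons with no obvious accessible name.
// Approximates name computation from aria-label, aria-labelledby, image alt
// text, and visible text; confirm the real output with VoiceOver or TalkBack.
function roughAccessibleName(el: HTMLElement): string {
  const ariaLabel = el.getAttribute("aria-label")?.trim();
  if (ariaLabel) return ariaLabel;

  const labelledBy = el.getAttribute("aria-labelledby");
  if (labelledBy) {
    const text = labelledBy
      .split(/\s+/)
      .map((id) => document.getElementById(id)?.textContent ?? "")
      .join(" ")
      .trim();
    if (text) return text;
  }

  const alt = el.querySelector("img")?.getAttribute("alt")?.trim();
  if (alt) return alt;

  return (el.textContent ?? "").trim();
}

document.querySelectorAll<HTMLElement>("a[href], button").forEach((el) => {
  if (!roughAccessibleName(el)) {
    console.warn("No accessible name; a screen reader may just announce 'link' or 'button':", el);
  }
});
```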

This is TalkBack on Android. This is a little bit more complicated, but these are the kinds of things you need to look for. Basically, this is a crossword, and if you activate a clue, you would expect it to take you to that position in the crossword so you could fill it out. Instead, all it does is swap your position from horizontal to vertical.

I know that’s a lot of information, but I promise you everything I have said is in those Word documents. Let’s talk about what’s next. We have updates from WCAG 2.2. There are two additional Level A success criteria:

– 3.2.6: Consistent help – providing consistent help mechanisms.

– 3.3.7: Redundant Entry – information previously entered is auto-populated.

There are four additional AA success criteria:

– 2.4.11: Focus Not Obscured (Minimum) – the current keyboard focus is not hidden.

– 2.5.7: Dragging Movements – dragging has an accessible alternative.

– 2.5.8: Target Size (Minimum). Sounds familiar, doesn’t it? Targets have an adequate size.

– 3.3.8: Accessible Authentication (Minimum) – providing easy methods of authentication.

There are three additional AAA success criteria:

– 2.4.12: Focus Not Obscured (Enhanced) – the entire keyboard focus is visible.

– 2.4.13: Focus Appearance – ensuring the keyboard focus indicator is visible to all.

– 3.3.9: Accessible Authentication (Enhanced) – providing easy methods of authentication.

Now, WCAG 2.2 was released in October 2023, and it hasn’t been endorsed by most governments yet, so this is fairly new.

There are also additional assistive technologies and mobile features that we should include, including Voice Control, increasing text size and font color from Reader View, and testing with a mouse.

We also need to review existing test cases. There are some cases that need review, such as the text-to-speech trap, which was only in the native app methodology but has started to occur on mobile sites. The mobile and desktop consistency requirement is great, but we need to have consistency between websites and native apps as well. If you’ve got an Asana website, it needs to be consistent with the Asana native app.

We also need to remove some assistive technologies and mobile features. What we find is:

– Grayscale in iOS and Android.

– Color correction in Android.

– Color inversion in Android and Invert Colors in iOS, which always fail together or pass together, so you only need to test one of them.

– Reduce Motion in iOS and Remove Animations in Android, which also always fail together or pass together.

– Classic Invert, which doesn’t provide any additional testing information beyond Smart Invert.

Lastly, we need to create an online resource. Currently, these are Word documents, but they would be better as a website that is searchable, easily updated, more likely to be used, and more accessible. I did start this late last year, but then I got sidetracked with my job. There are blog posts with the first few requirements available, and I hope to get back to it at some point. Unfortunately, paid work takes priority.

Now we are reforming the committee. I’ve been trying to reform the committee for a while. If you would like to join but feel that you don’t have enough information or don’t know enough about accessibility, don’t let that stop you: I learned so much about mobile accessibility through being on the committee. We all had research to do to find things out. It’s a really good way to learn about accessibility. If you would like to be involved, please email us at mobile23@accessibilityoz.com.

Thank you for coming today. You can access the presentations at accessibilityoz.com/about/conferences. There’s a bunch of free things on our website that you can look up if you’re interested. Now I’m available for questions.

>> AMBER: Thank you. That was really wonderful, Gian. I appreciate all of your expertise and the great examples that you showed of passes and fails. It actually reminded me before I go into questions, I feel like you should add this to your list– Hold on, I’ll share my screen.

>> GIAN: Oh yes, definitely. I’m always looking for examples. I had the best time finding the examples. It was just so much fun.

>> AMBER: I’ll put a link to this in the chat also, but I’ll describe it just in case anyone can’t– This came up in our conversation on one of our podcast episodes. My husband, Chris, was talking about how he would transfer things between our bank accounts, and apparently in the mobile app for Capital One, this is our bank, you have to slide across the button to confirm the transfer.

>> GIAN: Oh, yes. That’s terrible.

>> AMBER: I was like, “I wonder–” We were talking about it and I asked him if he’s ever tried to see if there’s an alternative or tested it to find out if there’s a different way. I don’t know. I found this in their help documentation where it’s explaining how to complete your transfer of your money between your two accounts. As you were talking, I was like, this sounds exactly like a failure that probably needs to be addressed.

>> GIAN: Yes, it definitely is. Please, please send it to me. I’ll add it to the document and I’ll remove your account numbers and things like that.

>> AMBER: Here’s the link directly to that image on Capital One’s website.

>> GIAN: Excellent. Thank you. There are so many things like that that you just– I do find it fun finding them all.

>> AMBER: I’ll go through the questions. If anyone has any and you want to add them to the Q&A, then I can pass them along. I also wrote down a few. I am curious– We had one about touch targets and then I have a follow-up about WCAG 2.2. This person had asked, “Would the touch target pass if the touchable area is 44 by 44 pixels, but the icon itself is smaller?”

>> GIAN: Oh, that’s a great question. Technically it would pass, but that’s actually something that we wanted to talk about in the next iteration of the guidelines because of this whole concept of inactive space. I showed you the example of Wikipedia, where it met the touch target size, but there was no inactive space between the items. I thought, “Well, the touch target sizes are sufficient, so we don’t really need inactive space,” but I’ve had so many people come back and say inactive space is so important.

Why do I talk about that? The reason why is if someone looks at something and goes, “That’s too small for me to touch,” even though they could, then it could be a problem. Maybe that’s something we need to define better in the guidelines. It comes back even to the slide to transfer your money. You said, “Maybe there is an accessible way.” Well, maybe there is an accessible way, but if no one can find it, then it doesn’t matter.

>> AMBER: The words literally on the button say “Slide,” right? They might not even try to see, “Can I just tap it?”

>> GIAN: Tap it, yes. I did this presentation at CSUN years ago on the accessibility of Facebook. They didn’t have alt attribute images. I had someone come up to me afterwards and say, “I work at Facebook. We’ve just instituted alt attributes on images.” I said, “Oh, I looked everywhere and I couldn’t find anything.” He said, “Oh yes, it’s really hard to find.” I was like, “Well, if it’s really hard to find, then it might as well be inaccessible.”

>> AMBER: Not exist.

>> GIAN: Yes, exactly. That’s definitely something to take into account.

>> AMBER: I think the thought that I had is, let’s say you’re linking your social media icons. I don’t think it necessarily means that the icons themselves have to be 44 by 44, but if you had a 2-pixel border around the edge of the target, maybe the icon in there could be smaller, but now it looks like a button or something, right? It tells someone there is more space. I think there are ways to communicate that without necessarily meaning every icon you have is massive.

>> GIAN: Has to be massive, yes. Absolutely, a border would absolutely be sufficient.

>> AMBER: The follow-up question that I have for you is WCAG 2.2 introduced that AA guideline, which says that it passes if it’s 24 by 24 pixels.

>> GIAN: Yes, I know.

>> AMBER: I’m curious how you feel about that. Is it big enough?

>> GIAN: Uh, no. [chuckles] We talked a lot about what size to make the minimum target size. The most comprehensive mobile guidelines that were published before ours were the BBC Mobile Accessibility Guidelines, and that’s what we went with, which is the 44. We talked to a lot of people, we tested it a lot of times, 24 is not sufficient. But I kind of feel like they felt like– I don’t know, they [inaudible] [crosstalk]–

>> AMBER: They were trying to appease designers or something.

>> GIAN: Yes, something like that. When I was on the working group, there was definitely a, “Oh, but if we do that, then the designers will just throw accessibility out the window because they can’t do whatever they want to do.” Basically, I do find designers often are terrified of accessibility, but it’s not that hard to meet accessibility requirements. As long as you don’t have gray text on a white background. I don’t know why designers love gray text on a white background. I do think 24 is too small, but it’s something we’ll discuss when we reform the committee.

>> AMBER: Yes, cool. FJ asks, FJ says, “Curious about references to tap and pinch, as this seems to assume that end users have the ability to do so. Can you speak to alternative ways people with physical disabilities with fingers or hands use mobile devices?”

>> GIAN: Yes. There’s a feature in the iPhone, I’ve forgotten what it’s called, but it basically allows users to mimic those features using taps. Just like the tab button is mimicked by swiping from right to left using VoiceOver, there is a specific mobile feature that allows you to pinch zoom by using different easy, accessible features. Pinch, I don’t think is particularly accessible so that’s something that I would definitely want to have an alternative for. Absolutely, there are accessible versions of–

If someone is quadriplegic and they use voice control, they could activate an item without tapping by saying, “Activate the search bar,” or something like that.

>> AMBER: Yes, or they might be able to say, “Swipe,” right, and that would work?

>> GIAN: Yes, exactly. Yes.

>> AMBER: I keep trying to find somebody who can come talk to us all about voice control and making sure things are accessible for that. I think a big thing with this to think about is if your accessible name matches the visible text on elements or things. That’s another thing that I wondered on that button that says Slide. I wonder if it actually has an aria label on it that’s like Complete Transfer or something, right?

>> GIAN: I actually wrote this article, I’ve put it in the chat, called iOS Voice Control saves the day. I will share my screen because the funniest thing is– It really is helpful to determine what the accessible names are, especially on native apps without having to use a screen reader or something. I found things like this, like the accessible name for live, L-I-V-E, is L-Y-V-E. The accessible name for the magnifier button for the search bar is a kiss. The accessible name for the back button is a rose. There are easy ways to turn it on and things like that. That’s definitely something to have a look at. Voice control is–

>> AMBER: Wait, hold on. Is that good?

>> GIAN: What do you mean?

>> AMBER: I’m totally thrown off. Those are mistakes or is that intentional?

>> GIAN: Those are mistakes. Yes. Those are mistakes.

>> AMBER: Okay. For a second I thought you were saying that was good and I was confused.

>> GIAN: No. No. No, no, no.

>> AMBER: Those are mistakes.

>> GIAN: These are ones that are good. Videos of the day, videos of the day, popular, my new, et cetera. This is just very, very strange. Voice control is really good. It came out about three months before we released the guidelines. We were like, “Should we include it?” We thought, “No, we’ve already got enough work.” It’s definitely something that needs to be included.

>> AMBER: Great. I actually had a second question, which is, can you recommend more alternative verbiage to phrases like “see more” or “read more”? Phrases that seem to assume people are sighted. FJ tends to prefer more content or simply more about article/topic, whatever that might be. Do you have any thoughts about that?

>> GIAN: People who are blind or vision impaired, they don’t tend to care if you use visual language like that; “see more” or “read more” or things like that. The one thing you need to be careful of is the use of the word “click” because click implies that you can only use a mouse, that it’s not accessible via the keyboard. “See more” is fine, no one’s going to get offended by that. However, you need to have links that are descriptive, and that’s actually a requirement in the mobile methodology that the link is descriptive outside of its content.

WCAG 2 AA allows that links can be descriptive in context. You could say, “Click here to download our report,” and the Click Here is the link. I think that’s terrible. Or, “See more to learn about our courses,” and the See More is a link. Our methodology would say that that link is not descriptive because it’s not descriptive by itself. I would suggest you’d say, “Read more about the iOS voice control. Read more about how you can save money with–” The whole thing being like– et cetera. That’s what I would recommend.

>> AMBER: One thing we’ve done if we’re writing instructions on how to do something, we’ll say, “Click,” or, “Activate” the Submit button or something like that, so it’s not always saying that it has to be clicked.

>> GIAN: We usually use the word Activate.

>> AMBER: I don’t know if I’ve really encountered– Obviously there are lots of different opinions. I think it’s nice to hear you have a similar experience to me that a lot of blind people don’t necessarily mind if you say like, “See more.” I know sometimes we worry if we’re being too ableist in our language.

>> GIAN: Well, talking about that–

>> AMBER: I have blind friends who tell me they’ll see me later.

>> GIAN: Yes, exactly.

>> AMBER: I’m like, hey, it’s okay.

>> GIAN: I also have an article on that called Communicating with People with Disabilities. It talks about that in a lot of detail, specifically around running user testing and things like that. I’ll put it in the chat as well.

>> AMBER: Thank you. We’d appreciate all the great resources.

>> GIAN: The steps are: Speak to the individual, respect their body and property, don’t make assumptions, use natural language, ask, and a note on invisible disabilities.

>> AMBER: Great. I’m going to try and go through the next few questions just quickly because we’re about at time and I want to be respectful of your time and everyone’s time. I’ll try not to chime in too much. We’ll see how we do.

We had a question that says, “How to keep up with the state after leaving a page and redirecting to itself while having the updated data.” I don’t know if you quite understand what that question is getting at.

>> GIAN: I don’t know what that question means. I’m sorry, [inaudible], sorry if I pronounced it incorrectly. If you can just rewrite the question, then I’ll answer that.

>> AMBER: Yes, maybe we’ll circle back if we can get a little bit of extra context on that one. Do you have opinions on WordPress themes that are best to use as a beginner to accessibility?

>> GIAN: Yes. If you search for Accessibility in WordPress themes, 80% of the ones you get will be accessible. That’s what I would recommend. You can also have a look at the AccessibilityOz website. We’ve got a section on our website about the websites we’ve built under Products, Accessible Websites, and they all have an accessible WordPress theme that we created. You’re welcome to use that if you want. I’ll put that in the chat.

>> AMBER: I’m trying to see if I can grab a link. Basically, what you want to search for on WordPress.org, if you’re looking for free themes, is Accessibility Ready. There’s a feature filter for it, which is, I think, a good place to start.

>> GIAN: Excellent.

>> AMBER: Natalie asks, “What is the best way for those new to accessibility to learn how to design or build WordPress sites based best on WCAG?” Do you have some favorite learning resources that you like?

>> GIAN: Yes, I wrote them.

[laughter]

>> GIAN: The best thing I would recommend is we have these fact sheets on all these categories: images, PDF, video, interactive maps, HTML5, content, JavaScript, tables, coding, keyboard, source order, forms, and mobile. They have really detailed content. You’ve got principles on why you’d want to make something accessible and the impact on users. There’s what’s called a manager checklist, which is basically a testing checklist. The developer checklist is very detailed. Each one has a correct example. Take this particular requirement, which is that visually dynamic information, such as a progress meter, should have a text equivalent. Here’s the code, and you can actually see a live demo. You can grab all of this because it’s all under Creative Commons. That’s where I would start. That’s why it was created.

There are also, under Resources, the CCC developer videos. I can’t remember how many, 16 videos, I think, on various different things. They’re all under five minutes, covering forms, HTML, and ARIA. That’s what I would recommend as well.

The WebAIM mailing list, I’ll put a link in the chat, is also very helpful. There’s a bunch of different free webinars that you can get from WordPress and all those different things. A lot of conferences release their videos for free after the conference. That’s something else to look out for too.

>> AMBER: Yes, I’ll put a link actually to WordPress Accessibility Day on our past event. You can go to each of the past events and all of the videos there. There’s a lot that’s WordPress-specific there, which I would definitely recommend. Of course, I mentioned at the beginning, but you can get all the meetup recordings off of our website as well. That’s a good place to start if you’re looking for WordPress stuff.

Erin is wondering, “Can you talk briefly about the orientation requirement in WCAG and your experience with that in the context of native mobile apps?”

>> GIAN: I’m not sure if I am answering or understanding correctly, but basically we have three requirements in the methodology. One is the ability to move between portrait and landscape and the other is that the system operates correctly in both portrait and landscape. Then the other is that it doesn’t swap unexpectedly from one orientation to another. That’s what I would– It doesn’t matter if you need to restart the native app for the orientation change to take effect. That’s another thing worth mentioning. Does that answer your question? Let me know if it doesn’t answer your question.

>> AMBER: Yes, I’m guessing that might answer his question. I think like on a website context, it’s just making sure it’s responsive, so in the mobile app, it’s a similar thing, right? That if you rotate your phone, it’s still functioning.

>> GIAN: Orientation– it works that way, it works that way, that kind of stuff. Yes, that’s what the orientation requirements are in the methodology.

>> AMBER: Okay, Tej asked, “What would be the best strategy or technique for displaying information or navigation for interactive maps on mobile with embedded content specifically within iframes?”

>> GIAN: I don’t know if I know how to answer that. I am not a developer. I’m not sure how to answer that, unfortunately. Tej, if you can send me a link to an example, I can definitely tell you what’s wrong with it and what’s right with it.

>> AMBER: I think one thing in general that I’ve learned from a lot of our users when we do user testing is that if we can avoid iframes, that is really helpful because iframes can cause some challenges for screen reader users. Maps always are though, if they’re embedded.

>> GIAN: Screen readers on desktop, they usually handle iframes if they’re coded properly. I can imagine iframes on mobile devices are just a really bad idea. I can imagine they don’t resize properly and all that kind of stuff. I don’t know if I’ve actually come across anything other than a YouTube iframe on a mobile device. Unfortunately, I can’t answer that question.

>> AMBER: You know what I do think of when I see this question too is when they’re talking about interactive maps, I’m thinking about like, perhaps if you went to the Airbnb website, and there’s a map where you can click on all of the different Airbnbs. I wonder if the solution here is, maybe if you do just have a website, it’s better to have a mobile app so that you have more control over how that map functions on mobile and then try to get people– like offer them, like, “Do you want to use the app instead?” when they go there. Give them that choice.

>> GIAN: I actually think that it’s easier to make content, even interactive maps, accessible using HTML than using native app code. HTML has so many accessibility features built in. It’s easier to test, it’s easier to fix, it’s easier to access. I would say that you should be able to create an accessible interactive map on a website fairly easily. I know Google Maps has– I’m sorry, I just saw Eagle-Eagle saying something and it’s really distracted me. I would have said that, yes, you should be able to do that.

>> AMBER: 100%.

>> GIAN: I absolutely have to address Eagle-Eagle.

>> AMBER: Okay, yes, do you want to read what you got distracted by?

>> GIAN: Yes, Eagle-Eagle says, “disabilityrightsWA.org seems to have an accessibility overlay.” Oh my God. “We’re told this is not best practice.” We stopped working with them probably 6 to 12 months ago. It was the first that I have heard of them having an overlay and I am very disappointed in them. Absolutely do not use overlays. I’m going to check it now and see which one it is.

>> AMBER: I didn’t see it, but also I’m [inaudible]. Oh wait, there’s a heading for Accessibility Toolbar. Oh no, now it’s talking to me.

>> GIAN: I wonder if this is–

>> AMBER: It’s Recite Me.

>> GIAN: Oh no, it’s Recite Me. Look, Recite Me is not terrible. It’s not an overlay, as such.

>> AMBER: It’s more of a toolbar than a JavaScript fixer, right?

>> GIAN: Yes, it’s not an accessiBe. Recite Me, BrowseAloud, ReadSpeaker, they’re all meant to make it easier for people with disabilities to access accessibility features and things like that. It’s not an overlay per se. They have actually written a very comprehensive document about how these toolbars are not overlays and should not be mistaken for overlays, because overlays are bad. That accessibility toolbar is definitely– I don’t know how accessible it is. I haven’t tested it, but it’s not an overlay, thank God. That’s the problem with finishing up with a client: then suddenly, all of a sudden, it’s like you weren’t even there.

>> AMBER: I have a final question and then I’ll let you go. When you were talking about testing and testing on all the devices, I think one of the things that came to mind for me is some of the challenge that we have, especially when we’re working with smaller clients, is testing budget. If we’re talking about, okay, we already know that we test on Mac and PC and we test with NVDA and JAWS and VoiceOver, and now we’re talking about needing to test all of the different mobile devices, iPad and iPhone and Android devices and all of that kind of stuff, I can envision that for a lot of organizations, there becomes a budgetary challenge to that. I’m curious, how do we weigh all of this for those clients? What thoughts do you have about making mobile accessibility testing as affordable as possible for smaller organizations?

>> GIAN: The thing is that you need to realize that when you’re building a website– or the client needs to realize, that you’re actually building two. Even if it’s a responsive website, you’re building a website that looks good on desktop and you’re building a website that looks good on mobile. That’s going to engender additional testing because you are basically creating a second version. You can’t really get away with not testing that content.

I will say, from my several years, I can’t believe it’s 2024, several years of using this methodology, that about 85% of all the errors you find occur on both devices. The assistive technologies are different, but if something doesn’t display properly, or is overlapping, or things like that, most of the time it’s going to occur on both devices. That’s where I think it’s valuable to look at your analytics and say, “Look, 80% of people using our site on mobile are doing so on iPhones. We have a restricted budget, so let’s test it on iPhone.”

The other thing to think about is the accessibility features. We recommend a whole bunch of accessibility features. In reality, you can usually get away with testing with the screen reader and testing with the keyboard and testing with the invert colors and the kind of reader simplified view for the mobile sites. The other things are not so important. If your site is accessible via a keyboard, there’s a 95% probability it’ll be accessible to a switch user. That’s kind of how you can get around it.

We do testing, and we now charge extra for our mobile accessibility testing because we found it added 20% to our workload. It just does. It does take time. It’s harder to get to the errors, it’s harder to get to the code. It can be difficult. There is that resource crunch, but you just have to explain to the client that you’re creating another version. That version isn’t going to be magically accessible or magically work. It needs to undergo testing just like everything else.

>> AMBER: Yes, that makes sense. I really appreciate everything that you have shared today. This has been wonderful. Tons of resources. I feel like I’m a little less overwhelmed now than I was when I first started looking at those Word docs. Thank you so much.

>> GIAN: They are rather overwhelming.

>> AMBER: Yes. We’ll have the recording up soon for everyone. Can you let everyone know again if there’s a good social media platform to find you on? I know obviously they can go to accessibilityoz.com, but is there a good place to find you as well?

>> GIAN: Yes. We’re AccessibilityOz on Twitter. I’m not brilliant at checking that. There’s also the AccessibilityOz company page on LinkedIn. Probably the best way to reach us is email. I’ll put my email in the chat; it’s gian@accessibilityoz.com.

The other thing I just want to say is I actually suffer from Long COVID so I may be slow to respond to your emails, but I will always respond. Sometimes people are like, “Oh my gosh, I sent that to you eight months ago.” I’m like, “I’m sorry.”

>> AMBER: I feel you on those.

>> GIAN: I do respond eventually.

>> AMBER: All right. Well, thank you so much. Thanks, everyone, who stuck around to the very end. We’ll be back in a couple of weeks with another meetup. Bye.

>> GIAN: See you later.

>> CHRIS: Thanks for listening to Accessibility Craft. If you enjoyed this episode, please subscribe in your podcast app to get notified when future episodes release. You can find Accessibility Craft on Apple Podcasts, Google Podcasts, Spotify, and more. And if building accessibility awareness is important to you, please consider rating Accessibility Craft five stars on Apple Podcasts. Accessibility Craft is produced by Equalize Digital and hosted by Amber Hinds, Chris Hinds, and Steve Jones. Steve Jones composed our theme music. Learn how we helped make thousands of WordPress websites more accessible at equalizedigital.com.