In this episode, we discuss the positive and potentially negative ways that AI and accessibility can intersect, from facial recognition and voice assistance to GitHub Copilot and the emerging concept of AI-generated user interfaces.
Mentioned in This Episode
- Poppi Strawberry Lemon
- Face recognition Super Bowl commercial – Google Pixel “Guided Frame”
- Accessibility Has Failed: Try Generative UI = Individualized UX
- Jakob Nielsen’s Bad Ideas about Accessibility
- Researchers developing AI to make the internet more accessible
- Can AI help boost accessibility? These researchers tested it for themselves
- The Tommy Edison Experience YouTube Channel
- What are probiotics and prebiotics?
Transcript
>> CHRIS HINDS: Welcome to Episode 60 of the Accessibility Craft Podcast, where we explore the art of creating accessible websites while trying out interesting craft beverages. This podcast is brought to you by the team at Equalize Digital, a WordPress accessibility company and the proud creators of the Accessibility Checker plugin.
In this episode, we discuss the positive, and potentially negative, ways that AI and accessibility can intersect, from facial recognition and voice assistance to GitHub Copilot and the emerging concept of AI-generated user interfaces.
For show notes and a full transcript, go to accessibilitycraft.com/060. Now, on to the show.
>> AMBER HINDS: Hey, everybody, it’s Amber. I’m here today with Chris.
>> CHRIS: Hello, everyone.
>> AMBER: And Steve.
>> STEVE JONES: Hello.
>> AMBER: We are going to be talking about AI and accessibility while we drink a fun soda today. Do you want to tell us about it, Chris? What are we drinking?
>> CHRIS: I'll totally admit the reason that this is here. We've got Poppi with strawberry lemon, a prebiotic soda. This was advertised at the 2024 Super Bowl.
>> AMBER: Their $1 million, 30-second investment paid off?
>> CHRIS: Yes, completely.
>> AMBER: Maybe, a tiny bit.
>> CHRIS: Because they got on this podcast, so our millions and millions of subscribers will all go out and buy this and you won’t be able to find it anywhere after today. Amazon will be cleaned out. No. I think it looked interesting. I like the idea, as a soda drinker, of people trying to figure out a formula for a soda that doesn’t have a bunch of artificial additives in it that is as appealing as the artificial stuff that we currently drink. I have yet to find one that convinces me to switch, but we’re going to keep trying. Yes, Steve’s got his Diet Coke ready to go.
>> AMBER: Steve is waving a Poppi strawberry lemon, and a can of Diet Coke.
>> STEVE: Diet Coke, yes.
>> AMBER: You're going to give us a judgment on which one is better.
>> STEVE: Yes, yes. The Diet Coke is my chaser. [laughs]
>> AMBER: Okay. I actually finally looked it up because I am the one who is very critical of these whole healthy sodas and I'm all like, "What's in it? Why?" I had to google because this says it's a prebiotic soda and I was like, "What is the difference between prebiotics and probiotics, and what does prebiotic even mean? Is it real?" I looked this up and the Mayo Clinic, which I think is reasonable, says that probiotics actually include the microorganisms, like yogurt, right? Like, it has live bacteria in it. A prebiotic does not have any bacteria like a probiotic, but what it has is typically fiber; extra fiber that acts as food for the bacteria that live in your gut, so it helps encourage them because it's stuff that they like.
I don’t know, I guess it’s like a high-fiber soda and it says, “Prebiotics for a healthy gut.” It has apple cider vinegar and it’s an immunity sidekick. It has these little icons on the side of it with those notes.
>> STEVE: If you have bad bacteria, it’s food for them too?
>> AMBER: I don’t know.
>> CHRIS: [laughs]
>> AMBER: Hopefully not, [laughs], but yes. The apple cider, I think, is really interesting because that could get a little sour, right?
>> STEVE: Yes.
>> AMBER: Apple cider vinegar?
>> STEVE: Yes, I think so.
>> CHRIS: Yes. Let’s find out.
>> AMBER: All right. We’re going to open it. Oh, it has stevia too. I’m always on the fence about these artificially sweetened things.
>> STEVE: I’m going to pour this into my Accessibility Craft glass today.
>> CHRIS: Is stevia an artificial sweetener?
>> STEVE: Yes, it comes from a leaf.
>> CHRIS: It’s natural.
>> AMBER: Okay, fine, but it’s not sugar.
>> STEVE: Okay, okay.
>> AMBER: I just feel like it has a weird aftertaste. Not that I necessarily think it’s bad for you.
>> STEVE: Yes, it does.
>> AMBER: I don’t know if it is. It’s more that, to me, it always tastes off; things that have stevia in them.
>> STEVE: Especially like erythritol. That has a weird–
>> AMBER: This also has cane sugar, so it’s not just– It smells good.
>> STEVE: It does.
>> AMBER: Oh, Chris, you poured it in a glass. I did not. Let me see. It's kind of a light yellow color.
>> CHRIS: Yes, it’s like an apple juice kind of color.
>> AMBER: Okay, but it’s supposed to be strawberry. I thought it would be pink. I’m going to pour mine in my glass too.
>> CHRIS: Definitely, you can taste the strawberry. The lemon is a little bit more subtle.
>> AMBER: All right.
>> CHRIS: Good acidity, I’m sure that’s the apple cider vinegar. The bubbles aren’t too aggressive. It’s not overly sweet. I would say it’s like off-sweet.
>> AMBER: Yes, I don’t really think it’s sweet at all.
>> CHRIS: It’s definitely not like that mega-intense thing that you get when someone puts a lot of stevia in something.
>> AMBER: Well, or even a lot of sugar. We’ve had pops on this podcast that I was like, “Holy cow, I feel like I’m eating a teaspoon of sugar in my mouth right now.” This one has 4 grams of sugar. I don’t know. How does that compare to your Diet Coke can? What does that say?
>> STEVE: What do you mean?
>> CHRIS: Zero probably.
>> STEVE: There’s zero sugar.
>> AMBER: There’s zero sugar in that?
>> STEVE: Zero.
>> AMBER: There’s zero grams of sugar?
>> CHRIS: Yes, this is 25 calories, 6% juice.
>> STEVE: Just lots of aspartame.
>> CHRIS: Yes.
>> STEVE: I don’t know. It’s not bad.
>> AMBER: I don’t really get strawberry. It smells strawberry.
>> STEVE: I’m more on the lemon side than the strawberry side.
>> AMBER: It tastes more lemony to you?
>> STEVE: Yes, there’s something– Maybe it’s the apple cider vinegar. I don’t know. Those two confusing–
>> AMBER: I don’t taste strawberry at all, but I do get a little tart lemon. It’s a little bit lighter than a lemon, or a lemon-lime, so there is something there, but it’s not like mega strawberry flavor.
>> STEVE: It’s okay.
>> CHRIS: I’ll keep sipping it while we talk, that’s for sure.
>> AMBER: What is the verdict? You started this out, Chris, by saying you always want to try these, but nothing has convinced you to stop drinking your regular pop yet. Would you buy this again instead of soda if you were in the grocery store walking down the pop aisle?
>> CHRIS: What’s hard is if I really wanted to convince myself this time, I probably should have bought something that is like an actual viable replacement for something that I frequently drink,-
>> AMBER: Oh, like cola flavor.
>> CHRIS: -like a cola or a root beer, which they do have those flavors. Tasting this, I'm like, "Yes, it's pretty good," but then you have all the cans, right? We're back to having tons of cans that we're recycling all the time, whereas I do still like my SodaStream. Maybe if Poppi made some concentrates that could be poured into a SodaStream bottle, maybe I would switch to those.
>> STEVE: And caffeine.
>> AMBER: Oh, yes. You need the caffeine. It’s caffeine-free?
>> STEVE: Yes. There’s no caffeine in it, that’s true.
>> AMBER: You can buy this to give your kids a healthier soda alternative.
>> STEVE: Well, that’s true, yes. For your next picnic or outing or whatever.
>> AMBER: This was advertised during the Super Bowl and Chris got it, hook, line, and sinker. A good segue into talking about AI and accessibility: did you all see the Super Bowl commercial that Google did for the Guided Frame feature that they added to Google Pixel phones?
>> STEVE: Yes, yes. I saw it during the Super Bowl.
>> AMBER: For anyone who didn't see this, we'll share a link in the show notes. You can go watch it over on YouTube. Basically, they have face detection now, which I guess is using facial recognition AI, so that if someone who is blind or visually impaired holds their phone up to take a selfie, it will recognize how many people are in the frame and tell them. Of course, the commercial is from the perspective of a gentleman who's blind. It starts with him taking selfies of himself or with his dog, and it would say, "One person in the picture." Then I guess he gets a girlfriend, or a wife, whatever they become, and it goes from one person in the picture to two persons in the picture.
I won’t ruin the commercial for you. You can go look.
>> STEVE: Spoiler alert.
>> AMBER: Spoiler alert, the faces increase. It’s really neat because I feel like a lot of times with AI, and maybe we can talk about this, we talk about the bad ways. We’ve talked about overlays before and how overlays sometimes try to use AI to interpret things and they don’t do a great job. This, I feel, is a really good example of how AI could definitely enhance technology for people with disabilities. I don’t know. I thought it was a neat commercial.
>> STEVE: Yes, it's neat how it's shown from the perspective of the person taking the photo. It's from the point of view of somebody that's visually impaired, so it's blurred.
>> AMBER: Yes, the whole thing is blurred until the photo is taken.
>> STEVE: Then they take a photo and it’s clear. Because the photo is being taken clear, right?
>> AMBER: Yes.
>> STEVE: It's pretty neat. It's interesting too, given the time where we are with AI, and just the word AI and how popular that is, it reminds me a little bit of when we moved from calling everything the internet to calling it the cloud.
>> AMBER: It’s a buzzword?
>> STEVE: Well, to some degree. I guess if Google had released this feature maybe five years ago, would they have called it AI?
>> AMBER: I don’t know.
>> CHRIS: I've done some reading, and I feel like I have also read about some apps that are being worked on that do image recognition beyond faces, so that if you're on your own and not with anyone, you can take a picture of objects around you with your phone and it will audio-describe what they are, or what the AI thinks they are. I could see that being useful in a practical sense if the thing becomes intelligent enough. If you have no other person with you, I can see that being a good thing to have when you're out and about.
>> AMBER: Actually, one of the other Gaady Award recipients, when we got a Gaady Award for Accessibility Checker, was an app that was created by, I'm pretty sure, Procter & Gamble in collaboration with someone else. It was an app intended for blind people to be able to go in the grocery store. They could literally take pics as they were– or use their video, so maybe they weren't taking a photo, but they were using their video and going along, and it would help them find objects in the grocery store. I envision you could probably also use it in your home, right? Like, you have five rectangular boxes on your shelf, but you want to find a certain type of cereal. It would tell you which kind it was each time. I just feel like those enhancements really probably make a huge difference in people's lives.
>> STEVE: There was a guy on YouTube called the Blind Film Critic, and he listens to movies and then reviews them. He will show how he uses his phone in life and stuff. He had an app, and this was years ago, an app where he could take pictures of money and it would tell him what bill it was, if it was a 20 or 10 or 5. It was really cool. I don't think they called it AI, but it was essentially AI, right?
>> AMBER: Yes. It's image recognition. Yes. We recently did an AMA at WordPress Accessibility Meetup. We were kind of like, "We need to do a whole podcast episode on AI, because trying to answer this in five minutes or less is really hard." One of the questions that came in was about Jakob Nielsen's recent article. Basically, he says that accessibility has failed and that generative UI and individualized UX, so a generative user interface that then gets individualized for user experience, is the only way to move forward. I feel like we should talk about this article a little bit here.
I want to start with this: he starts off by saying accessibility has failed because it's too expensive and it's doomed to create a substandard user experience. What do you guys think about that? What's your reaction?
>> CHRIS: If I may, I think the idea of saying that we should just call this a failure and abandon the standards is a lot like saying it should no longer be a goal to follow food safety standards because people still get sick at restaurants. Why are we blaming the standards when the real issue is that the standards aren’t being followed? Because I think in most cases, at least in our testing, right, where we take something that wasn’t following any standards and then we bring it up to standard and we bring user testers in to test it, and then they say the thing that follows the standards is a good experience for them, that means the standards do work, they are effective.
[music]
>> STEVE: This episode of Accessibility Craft is sponsored by Equalize Digital Accessibility Checker, the WordPress plugin that helps you find accessibility problems before you hit publish. A WordPress native tool, Accessibility Checker presents reports directly on the post-edit screen. Reports are comprehensive enough for an accessibility professional or developer, but easy enough for a content creator to understand.
Accessibility Checker is an ideal tool to audit existing WordPress websites, find accessibility problems during new builds, or monitor accessibility and remind content creators of accessibility best practices on an ongoing basis. Scans run on your server, so there are no per-page fees or external API connections. GDPR and privacy compliant. Real-time accessibility scanning. Scan unlimited posts and pages with Accessibility Checker free. Upgrade to a paid version of Accessibility Checker to scan custom post types and password-protected sites, view site-wide open issue reports, and more.
Download Accessibility Checker free today at equalizedigital.com/accessibility-checker. Use coupon code "Accessibility Craft" to save 10% on any paid plan.
>> AMBER: Honestly, that is a really interesting parallel, because I'm sure that lots of restaurateurs or people in that industry would say that following the rules is more expensive than not following the rules, right?
>> STEVE: [inaudible]
>> AMBER: Because, oh, man, this sat out on the counter for more than an hour or whatever the rule is, we have to throw it away. Whereas if you didn’t follow the rules, you just put it back in the refrigerator.
>> CHRIS: Or we don’t have to buy hand sanitizer anymore to sanitize our work surfaces anymore. We don’t have to buy bleach, right?
>> AMBER: Yes, so it is more expensive to follow the rules, but it creates a better outcome when we do.
>> CHRIS: Yes.
>> STEVE: I think I would agree. Like, don’t throw the baby out with the bathwater, right? It’s like it’s failed and we should just scrap it all. Then what, wait for generative UI to get up to speed? Why can’t it be both?
>> AMBER: Let's be real. I think we've talked a lot about even image alt text, and I've tested several of the WordPress plugins that generate alt text for images, including one that maybe is getting a little bit closer. What is it? I think it's AltText.ai, because it looks at some of the surrounding content. Like, it'll reference the post title. It'll reference your Yoast details. I don't think it reads the whole article. It will do that to maybe try and get a little bit better. None of them really– Every time I try them, they're just substandard. I'm like, "If we can't even describe images, how are we going to do what he's suggesting, which is that a user visits the website and we literally rearrange the whole content for them, or we remove sections that we don't think they personally will care about, because we know them?" I have a really hard time envisioning that.
>> STEVE: Too, if accessibility's failed and we just throw out all the accessibility standards and rely on generative UI, whenever that is ready to go, then we're reliant on that generative UI to get it right. They have to assess that. I don't know what input the end user has to put into the system to make sure that the generative UI actually makes a use case that is tailored totally to them, to their abilities, and to where on the spectrum those abilities fall. The crossover of abilities, between hearing and seeing, this seems like a huge thing.
>> CHRIS: As a perfectly realized concept, right? Like, where we kind of ignore all of the potential pitfalls of actually getting to the perfectly realized concept, generative UI sounds kind of exciting to me personally. The idea that my experience is tailored exactly to me and evolves as I personally evolve, that all sounds great. Personally, as I was sitting here thinking about this, I started to have all sorts of questions.
One of the questions that immediately came to mind for me, and this is because I think, like Yoast is talking about this, a lot of people in WordPress are talking about this idea of having more efficient computational systems that don’t use as much energy. One thing I was thinking about is where are we housing all of the computational power necessary to generate this perfectly adaptive system and how are we going to store what is effectively infinite different configurations of user preference for every individual web user of every individual website?
I just don’t know how we aren’t just taking the internet and 100xing what is already almost an unquantifiable amount of information.
>> AMBER: Well, it's beyond that. Let's be real. I think there are serious privacy concerns about saying that every website– How many of us don't actually want ads that are targeted to our user preferences, and don't want Google to understand that we would rather see A instead of B? We're actively trying to block that. If we don't even want to see targeted ads that understand what we're interested in, why do we think we want–
Then, think about this too, because if we’re now talking about building user profiles-
>> STEVE: Discrimination.
>> AMBER: -of people on the internet that store disability information, I'm sorry, that's probably, in the United States, HIPAA-protected information.
>> STEVE: Yes.
>> CHRIS: Yes.
>> AMBER: Healthcare information, do we really want a website to know when I go to it that I’m colorblind or I can’t see it all, or I don’t have the ability to use a mouse or whatever that might be? Maybe, but also maybe not. It would be so much better if the website was built in a way that the user’s personal device can adapt to it, not that the website or some internet thing is adapting to it.
>> STEVE: Even there, it’s not like these personal devices are not phoning home constantly. Your operating system is phoning home all kinds of telemetry data all the time.
>> AMBER: Well, I’m pretty sure we can tell, right? Google can tell what device, what browser someone’s in when it visits a website,-
>> STEVE: Yes, totally.
>> AMBER: -all that kind of stuff is available.
>> STEVE: As you said, you're probably right, it probably is HIPAA-protected information. As we've learned in these 30 years of the internet, data leaks happen, and they seem to happen more and more. That profile, even if that data is anonymized, could be attached to an IP address. It could be attached to– The browser can create a pretty specific fingerprint, right?
>> AMBER: Yes, a browser on a device in a physical location.
>> STEVE: Then what if that data is leaked? What if it’s put out and somebody has this ability or disability profile, whatever it’s called, and what if that opens them up to discrimination of some sort?
>> CHRIS: Well, there could even be discrimination inherent in the system, or there could even be ways to mess with people-
>> STEVE: Yes, totally.
>> CHRIS: -where you somehow inject information that messes up their customized user experience by– Can you imagine a situation where just an everyday person has their profile set up and then someone finds some way to inject nine different preferences that they don’t want into their system and it completely jacks up their UI for every website they visit.
>> AMBER: That’s probably the next version of hacking, right?
>> CHRIS: Yes.
>> STEVE: Yes, totally.
>> AMBER: Like, “Hey, I’m a hacker and I’m somehow able to identify all the people that are blind, and so I will figure out a way to redirect them to a different login page for their bank that sounds on a screen reader exactly like the login page for the bank, but because they can’t see, I’m able to trick them and then I can get their login information for their bank.” I don’t know. I just feel like there are nefarious ways this could be used.
>> STEVE: It could probably go the other way. It could be spoofed for somebody that doesn’t have that profile. It opens the door to all kinds of things.
>> AMBER: The other thing I think about too is how long this would even take. Let's say it is actually possible. I don't know. I know technology moves faster and faster and faster, but at least right now, if we trust the WebAIM Million, 96%, 97% of websites have easily detectable accessibility errors. That's what things like ChatGPT, OpenAI, and other AI models are being trained on: inaccessible stuff.
I think the only way we would actually arrive at generative AI being able to build accessible interfaces is if the vast majority of the data that–
>> CHRIS: Yes, the training set.
>> AMBER: Yes, the training set has to be accessible, which means we need to go fix all the websites on the internet or I don’t know, at least 60% of them, right?
>> STEVE: Yes.
>> AMBER: Or 60% of the most visited websites, maybe if that’s right. Who cares about someone’s blog that doesn’t get any traffic?
>> STEVE: That’s a good point.
>> AMBER: Unless we do that, how are we going to get good results?
>> STEVE: Well, if AI needs data to learn off of, and as you said, if the majority of it’s not super great, there’s got to be something or someone in the middle telling the AI, “Hey, we’ve evaluated this one, it’s actually correct.” This is what you should be learning from. To my knowledge, that’s going to have to be a human. That’s a huge problem.
>> CHRIS: I was just going to say too, with this generative UI thing, one of the other selling points that Nielsen brings up is this idea that if someone is blind, for instance, the entire user experience would get significantly truncated or shortened or summarized. He’s actually advocating for this idea that if someone has this kind of disability, they’re not going to get a full experience and all of the information that a sighted user would get.
I fundamentally don’t understand how someone who claims to have had 40 years of experience in UX could actually think that that’s a good idea.
>> STEVE: What does it mean to be blind? Is it 1 or 0, not blind or blind?
>> AMBER: There’s a spectrum.
>> CHRIS: About two.
>> STEVE: It's a huge spectrum. You could not have peripheral vision, you could have one eye, it could be degenerative. That too, over time, if you have a profile and you have a degenerative blindness that's going to progress, how do you modify that profile over time? He even says early on in the article that with generative UI, in theory, you could handcraft an optimized user experience for each major category of disabled users. Okay. Each major category, right? Now, my question would be, what percentage of the disabled population does that actually cover, because of the spectrum of all these?
Then he goes on to say, “We need an approach that scales and that can support users with a wide range of conditions.” I guess he’s saying you hit the majors and then you start working on the spectrums and the wide ranges and the crossover.
>> AMBER: I think this goes a little bit back to his reason why he was saying accessibility fails, that it's missing some people. For example, even we do this. We do user testing, and the most common user testing that we're doing is with blind screen reader users. We're not bringing in people who use a sip-and-puff device or eye tracking. We could, I have a contact for that. We don't have customers asking for it. I think that's maybe something he's saying.
At the same time, while I think that is totally valid and that there probably should be more diversity in UX testing, just in general, I think that doesn't mean accessibility has failed. There are certain standards that you can test for, right? If something's visible label doesn't match the accessible name, then someone who's using Dragon speech recognition might have a hard time getting to it, because when they say, "Select X, Y, Z," it can't select X, Y, Z because it doesn't know that that's what it is. I think there are ways that you can test for that just by reviewing the HTML. I don't know that it's totally failed or that the standards should be abandoned just because of that.
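For anyone curious what that kind of HTML-level check looks like in practice, here is a minimal sketch of the label-in-name comparison Amber is describing (along the lines of WCAG 2.5.3). The function names are made up for this example, and this is not Accessibility Checker's actual implementation; it only illustrates the idea that the visible label should appear within a control's accessible name so speech-recognition users can target it by what they see.

```typescript
// Minimal sketch of a WCAG 2.5.3 "Label in Name" style check: the text a
// sighted user sees on a control should appear within its accessible name,
// so someone saying "Select Submit Order" in speech-recognition software
// actually hits the control. Names here are illustrative, not a real API.

function normalize(text: string): string {
  // Lowercase, strip punctuation, and collapse whitespace so that
  // "Submit order!" and "submit order" compare as equal.
  return text
    .toLowerCase()
    .replace(/[^\p{L}\p{N}\s]/gu, "")
    .replace(/\s+/g, " ")
    .trim();
}

function labelMatchesAccessibleName(visibleLabel: string, accessibleName: string): boolean {
  // Passes when the visible label text is contained in the accessible name.
  return normalize(accessibleName).includes(normalize(visibleLabel));
}

// An aria-label that drops the visible text would fail this kind of check.
console.log(labelMatchesAccessibleName("Submit Order", "Submit Order form")); // true
console.log(labelMatchesAccessibleName("Submit Order", "Send"));              // false
```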
>> CHRIS: It’s an overly reductive view, right? The two choices it seems like he’s presenting are either you hire this gigantic expensive team to test comprehensively for every conceivable level of ability or disability or all you’re doing is trying to pass a “simplistic checklist” which I’m guessing is a dig at WCAG. I don’t know who would call that a simplistic checklist.
>> AMBER: Well, maybe that’s what that was, or maybe that’s more of a dig at all those blog posts that are–
>> CHRIS: Maybe I’m stretching.
>> AMBER: I didn't read it that way. I've seen other people respond to this who read it that way. What I think of more is these blog posts that people put out. Half the time they're not even accessibility people, they're just trying to game SEO, and the posts are like, "10 things you can do to improve the accessibility on your website right now." Right?
>> CHRIS: Yes.
>> AMBER: Here’s the thing, does that make your website accessible if you only do those 10 things or that small little checklist? No. Does it make it more accessible than it was before? Yes. There is value in some of those checklists and things to get people started. I have a hard time envisioning how we remove humans from this. I don’t love the idea that– Where he’s saying it would build an interface that thinks about you and remove stuff. Why don’t you give the user the choice? That’s why we have headings. If they want to skip a section about something, they can choose to do it, but they still see the heading in their heading list and know it’s there.
>> STEVE: At a basic level, the simplest way to boil this down is that the screen reader is being moved before the UI creation, the graphical user interface creation, and it's being rebranded as AI, so that the screen reader will decide if it needs to present the spoken word to the end user, or if it needs to present a graphical user interface to the end user. That's a very black-and-white decision, right?
>> CHRIS: Yes, it’s an incredibly divergent path.
>> STEVE: Yes. I would argue that maybe a lot of blind users– when I say blind users, I mean blind users of a huge spectrum, some may want both still. I don’t know. It’s interesting. I don’t like the all-or-nothing approach.
>> AMBER: I don’t think it has to be so expensive, like the way he’s–
>> CHRIS: No.
>> AMBER: I think we have figured out ways: spreading costs out over time, being more intentional in the elements that you choose. Like, if you don't put a carousel on your website at all, there's a lot less accessibility work that has to go into that. There are ways that people can build nice-looking, yet simple websites that are very accessible that do not require huge amounts of effort. Especially if, and this is my plug and dig at all the WordPress plugin developers out there, especially if you put the effort into making your plugin accessible, because then everyone who uses it will just have an accessible web experience from that component of their website by default.
>> STEVE: Yes, or the opposite. You can make everybody’s website inaccessible.
>> AMBER: But don’t do that, please.
>> STEVE: That’s an interesting way of doing it. If you’re a company hiring an agency to make you a website– like with our company, accessibility is first for us. It’s a non-negotiable for us because it’s in our name, it’s what we do. At first, when we had to start making websites as accessible as possible, there’s a lot of work that’s gone into building up our systems and our code base and our code starters to make them accessible. Now, as we move on, and we have to learn these things, we’re building on it. It’s becoming easier for us.
It’s just like what responsive design was. At first, it was like, “This is hard. I have to do all this extra work and the client’s not really paying me any extra for it.” Over time it just, it’s become just natural, we just do it.
>> AMBER: You’re not building a new tab block every time you have to make a tab block or a new accordion every time you make an accordion. You put the effort into making it once and then you just reuse it or restyle it.
>> STEVE: Well, too, if we do have to make it again, we have the knowledge in our brains, as developers.
>> AMBER: To just type it out in the right way.
>> STEVE: To do it the right way. Yes.
>> AMBER: That's why we talk about shifting left being so important. You guys may or may not have had a chance to see this, but there are a couple of studies that I found. I gave a five-minute-ish lightning talk in the middle of the keynote for WP Engine's DE{CODE} conference; I think the recording for that, by the time this episode comes out, should be on YouTube for people who didn't attend. I was doing a bunch of research into accessibility and AI, and I found some different studies. One's from Ohio State University and the other one is from the University of Washington, both of which we'll put links to in the show notes. I thought it would be interesting to chat about.
The Ohio State one actually was from January of this year. Basically, they're trying to create an agent they're calling Mind2Web. They're embracing the dynamic nature of real-world websites and basically using an agent to handle tasks for people. Some of the tasks that they have been trying to train the AI to do are things like booking one-way and round-trip international flights, following celebrity accounts on Twitter, browsing comedy films from 1992 to 2017 streaming on Netflix, and scheduling car knowledge tests at the DMV. I don't know if that's like a driver's license exam at the DMV. Really complex, multi-step tasks. Right now, AI can do simplistic, single things, but they want to create an AI model that could go out and do these tasks for people with disabilities.
They said, for example, the booking international flights would take 14 actions. If you had your personal AI agent and you were just like, “Hey, I need to go to WordCamp US in Italy in June,” it would get the dates. It would check all the airlines. It would probably say back to you, “Okay, what do you want to pay? Here are the fees.” Then you pick one and then maybe the times or whatever, and then it would fill in all the forms for you, submit your payment information, and be like, “Great, you’re all booked. You should have a confirmation in your email,” which is kind of interesting as far as– This is almost a better, smarter virtual assistant, I guess. I don’t know.
>> STEVE: Yes, “If this, then this,” right? These are voice commands, right?
>> AMBER: I think so.
>> STEVE: You give it a voice command.
>> AMBER: They're trying to fine-tune it with both open- and closed-source large language models, like Flan-T5 and GPT-4. They're saying there's definitely some potential, but they're not anywhere close to it yet. I thought that was an interesting thing that someone was working on.
The University of Washington study that I came across is actually less enthusiastic about how generative AI can help people with disabilities. They did several different pieces of research on different vignettes, or types of people with types of disabilities, doing things. For example, they had someone they called Mia, who had intermittent brain fog, use ChatPDF.com, which summarizes PDFs, to put in PDFs and then summarize them and explain them to her if they were complex. They said it often gave completely incorrect answers, which doesn't surprise me at all, because I feel like ChatGPT makes stuff up all the time.
>> STEVE: Yes, yes, yes.
>> AMBER: Or they were talking about AI models making mistakes. They had things where, let's see, one author who was autistic was trying to use AI to help write Slack messages at work because he was spending too much time laboring over the wording. His peers, his coworkers, said the messages were robotic, but the author did think that it helped him in writing Slack messages. It was kind of a mixed bag there. I don't know.
Some of the biggest things were that they found a lot of ableist assumptions in what was coming back. They said, let's see, ChatGPT would give people summaries that didn't make sense, or it was just using language that was kind of negative toward people with disabilities. That probably goes back to what we were talking about before, about being trained off of average websites.
>> STEVE: Average content, yes.
>> AMBER: Going back to that University of Washington article, which was less enthusiastic, they were doing multiple tests with ChatGPT and Midjourney and different tools that use the GPT-4 model to see how it could potentially help people with disabilities. There was one they were talking about where they had someone use chatpdf.com, which summarizes PDFs. They would upload the PDFs and then it would give them a summary. This was someone who had, they said, intermittent brain fog, so maybe needed some help with understanding complex documents. The tool occasionally gave completely inaccurate answers. I think we've all been there, right?
>> STEVE: Yes.
>> AMBER: Had ChatGPT tell us something super weird?
>> CHRIS: Yes.
>> AMBER: That’s not surprising. They said that in one case, the tool was not just inaccurate, but also ableist. They said that it changed the paper’s argument to sound like researchers should talk to caregivers instead of chronically ill people. The paper was saying if you’re a doctor, you should still talk to the chronically ill person about their health care, not their caregiver, but it changed it to say, no, you shouldn’t talk to them, you should talk to their caregiver. That was something that flagged for them as a problem.
>> STEVE: Well, that could be like life-endangering, depending on what they’re talking about.
>> AMBER: Completely dehumanizing.
>> STEVE: Yes.
>> AMBER: Could you imagine if this is something that's, I don't know, a doctoral– I hope that doctors actually do their homework, but I don't know, maybe they're like any other university student and they want to get the quick summary version, right? The CliffsNotes version. If they're training, and they're reading it, and it's like, "Always talk to the caregiver, not the actual person," then they'd start doing that, which is the way it used to be, probably. I don't know.
They did find some instances where it was maybe helpful. They had a person who was autistic who had been using AI to help him write Slack messages. The tool made the author feel more confident in interactions with his co-workers, but the co-workers felt like the messages were robotic. I don't know. They came from a robot, so maybe that's why. Maybe it is a little helpful, but overall, they said there are just too many problems present right now.
I think, circling back, we have to do the manual work to improve the things that these models are trained on before these models are actually going to be– I don't know how they're going to get good without us doing the accessibility work. That article we were talking about, Jakob Nielsen's, maybe it's great in theory years from now, but how do you get there? You have to do accessibility work.
>> STEVE: Yes. I think this study highlights too that, just like you said before, the AI can’t train off of bad data. This underscores the importance of including users with these abilities in the testing.
>> AMBER: I don’t know. I feel like if we had to recap all of this, how do we feel about– It’s even interesting to think about. We’ve talked a little bit about could we or are there ways we could include AI in Accessibility Checker. Are there ways that we should be including AI more in the custom websites we build or that we think other WordPress people should be doing? What do you guys think? Do either of you have any ideas? Should we be using AI more or encouraging others to?
>> CHRIS: I thought that the idea you had, I think it was during the VIP panel at WordCamp US, about maybe leveraging an AI integration to help people write simplified summaries or adjust the reading level of content, could be interesting. I think it would need to have a human review requirement before they're allowed to accept it. They would have to literally click an Accept button and be like, "I read this. Looks good."
>> AMBER: I came back from that panel and I actually tested it a bunch with ChatGPT, because I was like, "This would be cool, and then we could be all like, our plugin has AI," because it's the buzzword. I feel like so many plugins are doing that; I'm like, you have AI for this one tiny thing just so you can say it. Even when I would say, "Write this at a fifth-grade level," or, "Write this at a second-grade level," or, "Write it for a seventh-grader," it would give me back text that was like college-level language. I know it's not just me, because I've seen other people writing about this. I don't understand, because I feel like that's something that should be easy.
>> STEVE: Well, did you prompt it with an actual question page?
>> AMBER: Yes, I would say, "Here's text that is at a 12th-grade reading level." I would say, "Summarize this text at an eighth-grade reading level." It would return a summary that was generally accurate for what I gave it. Then I would take that summary and paste it into WordPress in our simplified summary thing, because Accessibility Checker runs a Flesch-Kincaid readability test on it, and it would always come back over 12th grade.
>> STEVE: Well, try prompting it to use the Flesch-Kincaid evaluation.
>> AMBER: Oh, I didn’t specifically say that.
>> STEVE: Give it an evaluator to actually evaluate against and see what it does. It might give you more accurate results.
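Since this exchange hinges on the Flesch-Kincaid score that Accessibility Checker reports, here is a rough sketch of how that grade level is calculated. The formula is the standard Flesch-Kincaid grade-level formula; the syllable counting below is a simplified heuristic for illustration and none of this is the plugin's actual code.

```typescript
// Rough sketch of the Flesch-Kincaid grade-level formula:
// grade = 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
// The syllable counter is a vowel-group heuristic for illustration only.

function countSyllables(word: string): number {
  const w = word.toLowerCase().replace(/[^a-z]/g, "");
  if (w.length === 0) return 0;
  const groups = w.replace(/e$/, "").match(/[aeiouy]+/g); // drop a silent trailing "e"
  return Math.max(1, groups ? groups.length : 0);         // every real word has at least one syllable
}

function fleschKincaidGrade(text: string): number {
  const sentences = Math.max(1, (text.match(/[.!?]+/g) || []).length);
  const words = text.split(/\s+/).filter(Boolean);
  const wordCount = Math.max(1, words.length);
  const syllables = words.reduce((sum, w) => sum + countSyllables(w), 0);
  return 0.39 * (wordCount / sentences) + 11.8 * (syllables / wordCount) - 15.59;
}

// A "simplified summary" scoring above 12 here would still read as college level.
console.log(fleschKincaidGrade("The cat sat on the mat. It was very happy there.").toFixed(1));
```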
>> AMBER: Maybe we can still test having that in our plugin. I don’t know.
>> STEVE: I think it would be great, if we can get it to work. I think Chris is right that we're in an AI bubble; everything's AI. A lot of it, like I said earlier, is a little bit of a rebrand of code that we've been writing anyway, because it's such a buzz thing. ChatGPT and GitHub Copilot work really well. We do use AI in our development processes. There's a human between what's generated or what's evaluated and what actually makes it to production. It's a great tool for doing a lot of low-level things that are very programmatic; I can't believe that we've coded for 20 years without it. Very programmatic things, AI is very good at.
I’ve gotten it to do weird stuff. I’ve gotten it to do what they call hallucinations and contradict itself and stuff. I don’t think it’s a replacement for humans. I think it’s an augmentation for what we’re already doing.
>> CHRIS: It’s a replacement for having a really entry-level intern helping you out, kind of.
>> STEVE: Yes, totally.
>> AMBER: Not an accessibility expert?
>> CHRIS: Yes. Think of them like, “What’s a task I could give someone who’s just out of high school?” That’s pretty much what you can ask an AI to do. Just at scale.
>> AMBER: Really fast on many pages.
>> CHRIS: Yes, really fast on many pages. It’s like an army of 100 high schoolers if you’re having it parse a bunch of data. I use it to help me summarize things. I use it to give me a starting point from which to write or to help me come up with structures of things. I used it recently and it was actually really good at this, I used it to construct a bunch of different custom Google search queries that I wanted to set up for finding various things that are really specific. It did a pretty good job at that. [inaudible] [crosstalk].
>> AMBER: Wait, but the things you were looking for, I went and I found multiple examples that you didn’t find, and I typed my own search query.
>> STEVE: Wait, hold on. Are you saying you’re better than AI, Amber?
>> AMBER: Yes.
>> STEVE: Ooh, I just branded it; Amber AI. There we go.
>> AMBER: I am not faster than AI. I am really slow. I write blog posts and they take me many, many, many hours to generate the number of words that AI will spit out in a handful of minutes, but I think my words might be better or my paragraphs might be better.
>> STEVE: They’re done in your voice. AI is very like–
>> AMBER: I will admit I don’t pay for it so when I’ve experimented, I’ve only used the free version of ChatGPT, and so it’s not as trained probably as if I paid for it and it knew me.
>> STEVE: Yes. GPT-4 is much better. Overall, with all these articles that we've talked about, I think it's good that people are thinking. I think it's a net positive that they're thinking about how to utilize these tools to make things more accessible. I'm a little apprehensive about taking the all-or-none approach: throw all the WebAIM stuff out that's been out forever and just adopt an approach that, by his own article, is five years off even if they took an aggressive approach.
I think AI can be a net positive in the accessibility realm. I think these things all go together, work together to make a more positive experience.
>> AMBER: I think there's a reason why all these other companies now are adding human services. They're not just SaaS products anymore. A lot of them, like AccessiBe and UserWay. UserWay got bought. We talked about that. It got bought by a service company, but they're not getting rid of their service component.
>> STEVE: What exactly are they adding? I’m not aware of this. I think AccessiBe has–
>> CHRIS: AccessiBe has a whole suite of tools that are more developer-focused and design-focused-
>> AMBER: Well, not just tools. I’m talking about you can go there–
>> CHRIS: -and they have consultants too.
>> AMBER: Yes. You can hire consultants because they got so much flack about only doing AI accessibility. I don’t know “how AI” it is. Sometimes I’m not sure it’s actually that smart. It’s not learning. I guess there’s a difference too. AI, I think, creates–
>> STEVE: Has to learn.
>> AMBER: It has a component of machine learning with it where it absorbs things and changes.
>> STEVE: Or iterates.
>> AMBER: Yes. I don’t really know if the overlays do that. I think they just have rules that they follow. I don’t know. I haven’t actually looked at one, so I could be totally wrong. They’re doing that. I think that right there is an indicator. If these companies that are literally based on only technology are adding the human component back in because they’re realizing they can’t do it yet, I think that is a sign to us that AI is not there yet.
>> STEVE: Yes. Really, philosophically, AI as a whole, what are we trying to achieve with all this? Are we trying to replace ourselves with everything, or are we just trying to–
>> AMBER: To be fair, even a little bit with Accessibility Checker, this is a thing. I don’t think that that article saying that accessibility is really expensive is necessarily always wrong on that front, right?
>> STEVE: Right.
>> AMBER: We've talked about how expensive the SaaS tools are, and that's why we built Accessibility Checker, because nobody wants to know what we were paying, and we're probably not allowed to say what we were paying Monsido because I'm pretty sure we signed things, so we can't say that on a podcast, but it was a lot of money. We built Accessibility Checker so we wouldn't have to do that. I do think our goal, even with that tool– I even recognize that we're an agency, we're a team of multiple people. We don't charge $15 an hour or whatever. There are probably even some people who can't afford to pay freelancers outside of the US who might charge $25, $15 an hour to test things; they don't even have the budget for that.
I do think that there is a need for these tools to continue to adapt and to help make it easier to replace us to a degree. Maybe eventually, if we get to a point where we have full coverage and we can say Accessibility Checker is smart enough that it can test all the things a human needs to test, that would be cool. I don't think I would object to that if it was true. I just don't know how long that could possibly be. Even Deque, with their axe and their guided testing, would still require a human to interface with it. They've still said that they catch 50% of problems with their automated tool.
>> STEVE: Yes.
>> CHRIS: It’s a long way off. Even then, it’s replacing a human in a mundane and repetitive activity to an extent. I don’t mean that as an insult to the practice of accessibility. I just mean that I am sure that the accessibility auditors who are for the 20,000th time saying that your image needs to have alt text or you need to have an H2 after an H1 might rather be telling someone something a little bit more thoughtful and deep about accessibility and maybe taking accessibility to the next level beyond a standard.
I feel like the humans in the accessibility space probably don’t need to be super afraid of AI because I think that we’re just going to let the AI take care of the baseline-level stuff that we’ve been all dealing with down in the trenches. We’re going to get to start to think about better accessibility use cases and better ways to adapt technology to people’s needs. This Nielsen guy, I think, is three steps ahead where he’s thinking about how we can cut humans even out of that.
>> AMBER: Well, hey, he got all the clicks.
>> CHRIS: He did. He got all the clicks and all the rebuttals and everybody’s talking about him, so mission accomplished, man.
>> AMBER: He's winning at SEO.
>> CHRIS: Bravo.
>> STEVE: It’s bigger than writing an article. If he believes these words, then it’s time to get to work, you’ve got five years.
>> CHRIS: I said this privately to all of us, but the article kind of read like he was about to start selling something, to me. With how many holes he was poking in accessibility. Maybe we’re a few months away from some big announcement from his consultancy or whatever he does.
>> STEVE: Or investors or something.
>> CHRIS: Or investors or something.
>> AMBER: Well, we will all have to stay tuned. I feel like this is probably the first of many AI episodes that we’ll have on this podcast over the years because it is constantly changing. I think bottom line, would you say AI helps accessibility?
>> STEVE: From the developer standpoint in our workflow, it has definitely helped augment our development, yes.
>> CHRIS: I’ll be pretty agnostic and just say that I think like any other tool, it can be used or misused. There are, I think, people who are trying to use AI to address accessibility and it’s having a net negative outcome. I think there are others who are trying to use it as having a net positive outcome.
>> AMBER: Well, it’s been fun chatting with you both. I enjoyed my strawberry lemon Poppi.
>> CHRIS: My glass is empty.
>> STEVE: This is going to be a bad joke, but if it was real bad, we’d have added another O to the name.
[laughter]
>> AMBER: But it wasn’t.
>> STEVE: It wasn’t?
>> AMBER: You’re not going to do that, right?
>> CHRIS: Would the illustration be a poop emoji on the can, too, if that was a–
>> STEVE: Exactly. That’s genius. I like that.
>> AMBER: You know that just like there is Jones turkey soda, I bet if we made that soda, somebody would buy it.
>> STEVE: Yes. Just on branding alone, yes.
>> CHRIS: Somebody probably would. All right.
>> AMBER: Well, we will be back in two weeks with another episode.
[music]
>> AMBER: Talk to you then.
>> CHRIS: Bye. Bye.
>> STEVE: Bye.
>> CHRIS: Thanks for listening to Accessibility Craft. If you enjoyed this episode, please subscribe in your podcast app to get notified when future episodes release. You can find Accessibility Craft on Apple Podcasts, Google Podcasts, Spotify, and more. If building accessibility awareness is important to you, please consider rating Accessibility Craft five stars on Apple Podcasts. Accessibility Craft is produced by Equalize Digital and hosted by Amber Hinds, Chris Hinds, and Steve Jones. Steve Jones composed our theme music. Learn how we help make thousands of WordPress websites more accessible at equalizedigital.com.
[END OF AUDIO]