(DESCRIPTION)
[00:00:00.00] Allyant Webinar - Internal Invite.

(SPEECH)
[00:00:02.46] Awesome. OK. Well, thank you, everybody. You should be seeing the Allyant logo plastered up on your screen there with “Simple, Seamless, Accessibility.” And I’d like to thank you for joining us today.

(DESCRIPTION)
[00:00:17.52] Andy Baum.

(SPEECH)
[00:00:18.30] What we are going to be presenting on and talking about is “Navigating the World as a Person with Visual Impairment.”

(DESCRIPTION)
[00:00:25.85] Live Demo.

(SPEECH)
[00:00:27.63] And it will be primarily a live demonstration after I do an introduction here. And so what we’re going to be talking about, really, is how people who are blind experience health insurance content.

[00:00:42.28] And rest assured I won’t be picking– we won’t be picking on any of your Blue plans. We’ve kind of cherry-picked some other things to talk about. But I want to briefly highlight presenters here.

[00:00:57.46] That’s me on the left. I’m a senior account executive here at Allyant for document accessibility, and I’m also the national account manager for the Blues. And on the right there is Aaron Page. He’s our director of accessibility auditing. He’s going to be the guy who does the live demo. And I’ll tell you more about Aaron briefly when I go to transition over to his demonstration.

[00:01:23.89] A brief agenda– we’re just going to briefly talk about, why Allyant? Who are we, what do we do, and why is that relevant to you guys in Blue Cross Blue Shield plans? Aaron’s going to do his live demonstration of health insurance content, and that will really kind of drill into web pages as well as documents posted to the web. And we’ll show some examples of good, bad, and somewhere in between. And then we’ll open it up for question and answer.

[00:01:52.86] And as Briana mentioned, please add any questions that you have into the chat. Seeing as this is recorded, we won’t mention any names during the question readback. But we’ll have your contact information to follow up with directly after the webinar.

(DESCRIPTION)
[00:02:08.97] Why Allyant?

(SPEECH)
[00:02:09.83] With that, who are we at Allyant? As I mentioned, we’re about seamless and simple accessibility to help ensure equitable access to information and communications for your customers. We work with over 1,000 customers ourselves, including many Fortune 500 companies and a number of your sister Blues.

[00:02:30.38] Our technology is recognized by the US Department of Health and Human Services, as well as CMS, and the World Wide Web Consortium, which actually developed the Web Content Accessibility Guidelines, or WCAG for short, standard. And so that’s really important when, as a company, what you specialize in is digital and document accessibility.

[00:02:54.56] And we really are the most comprehensive technology-enabled accessibility solutions provider out there. And we hope to be pretty much a one-stop shop for a lot of your document and digital accessibility needs. We are a national supplier of the Blue Cross Blue Shield Association, as Briana alluded to. And all the Blues are eligible, as well as most of your subsidiaries as well. If there are any questions, you can certainly reach out to Briana or myself, and we’ll be happy to help you out and let you know what can be done.

(DESCRIPTION)
[00:03:29.54] We are here for you.

(SPEECH)
[00:03:31.13] Also wanted to let you know, with AEP fast approaching here, how can we help? What is it exactly that we do? We’re first and foremost a software company. So if you’re looking to develop any type of in-house accessibility resources, or if you’re interested in postcompositional automation for those legacy applications that churn out output, that’s something that we can do.

[00:03:54.21] We can leverage automation to do accessible treatment, if you will, on documents that are coming out of those legacy apps. We also function as a service bureau. So you can let Allyant do the heavy lifting and the remediation work for you with guaranteed compliance. That’s especially important as we get ready to embark on the crazy time that is annual enrollment. And most of our customers, at that point in time, don’t want to be bothered with doing work internally using our software and prefer instead to have us do it for them.

[00:04:30.69] We also focus on web and digital auditing. And that’s what Aaron’s going to be kind of highlighting here today and showing you. And that deals with websites, member portals, mobile applications, for both new and existing properties. And basically, if it’s internet and some way that you are presenting yourself to the world digitally, we can help you with that and consult with you in terms of helping you put your best digital foot forward.

[00:05:07.11] And we also focus on alternative formats of documents. So if you have customers who are looking for Braille, reflowed large print, et cetera, we do that work. It’s HIPAA-compliant. The production facility in upstate New York is offline and not connected to the internet and so forth so that we’re able to maintain your members’ privacy and security. And we deliver those documents directly to the member on your behalf.

[00:05:39.84] And finally, where clients are looking for accessible PDF but protected-data remediation is involved, where there’s PHI or PII, we can do those in a HIPAA-compliant fashion as well, things like explanations of benefits or premium payment statements, et cetera, you name it. So that’s really kind of a 10,000-foot view of the services that we offer, but I’m happy to answer any more specific questions. Just engage me via email after the fact. I’ll be happy to exchange information and see where we might be able to help you out.

[00:06:21.38] Getting engaged with us is super easy. We just need to discuss whatever the desired services and scope are and then get a copy of the national marketing business agreement and the eligible purchaser agreement, which Briana can help with as well. And then we work on budgeting and quoting using your Best [? in ?] Blue pricing, get that EPA and/or statement of work executed, and we implement.

[00:06:45.51] I have seen this take as little as a week just before open enrollment a few years ago. So depending on the motivation of your particular plan, we can get up and running with you very quickly. With that, I would like to give Aaron Page a little more detailed of an introduction here and then kick it over to him so that he can take up the majority of this time with his demonstration.

[00:07:14.19] Aaron currently serves as our director of accessibility at Allyant. Aaron is blind. He was born with congenital glaucoma, lost his remaining useful vision back in 2009. And he’s been a daily user of screen reading software ever since.

[00:07:30.25] So Aaron was trained in the use of screen reading software and other assistive technologies at Lion’s World Services for the Blind, after which he went to the University of Montana and earned a BS in business administration with a major in management information systems. And while attending the university there, Aaron worked with electronic and information technology accessibility and served as a subject matter expert for the campus accessibility task force, as well as a student representative on their ADA committee.

[00:08:04.12] In February 2018, he joined one of the three companies that merged together here to form Allyant as an accessibility engineer and screen reader auditor. So primarily then, what he did was auditing digital content to ensure conformance with Web Content Accessibility Guidelines and providing client support throughout the remediation process, as well as providing workshops and trainings for clients upon request.

[00:08:31.80] And in 2022, he was promoted to director of accessibility, at which point we expanded his duties to include development and refinement of Allyant’s digital accessibility audit processes, also community involvement via presentations and webinars, like the one he’s speaking on today here with us, and also managing Allyant’s own internal accessibility initiatives.

[00:08:55.57] So with that, I would like to stop my share. And I will invite Aaron to take over driving and wow us with the experience that he has as a blind person using assistive technology to engage with health care and health insurance information. It’s a pretty pun-intended eye-opening experience. So, Aaron, all yours.

[00:09:23.86] Thank you so much. I really appreciate it. Give me one moment here. I’m going to go ahead and turn on screen sharing here.

[00:09:29.82] So we are doing a live tech demo. I always hold my breath a little bit when I do this part and turn the sharing on. Let’s see if everything works like it should. But if everything goes like it should, you will eventually see my screen and hear my screen reader.

[00:09:49.27] You’re sharing your screen.

[00:09:50.65] OK.

[00:09:51.34] Health insurance plans, vertical bar-Google–

[00:09:52.73] So if everything went like it should, you should be hearing my screen reader and seeing the Aetna site’s home page.

[00:09:58.78] Health insurance plans, vertical bar.

[00:09:59.89] We are.

[00:10:00.49] Anybody confirm that? Perfect. Excellent.

[00:10:02.71] All right. Well, let’s dive in then. So hi, everybody. My name is Aaron Page, and it’s really great to get to talk with you today.

[00:10:11.39] I always like to start out these screen reader demonstrations by just setting the stage by explaining what screen reading software is. There is often a lot of confusion about what the difference is between screen reading software and text-to-speech. So screen reading software is software that is either installed on top of or part of the operating system that allows a user to completely operate and interact with a device using nothing but speech.

[00:10:44.44] So text-to-speech software, on the other hand, is designed for users who have print disabilities other than blindness to be able to read text that is being displayed on the screen. So the difference there is that screen reading software allows you to actually operate a device just using speech. Text-to-speech software assumes that you can see.

[00:11:07.79] It assumes that you are able to operate the device visually, using a mouse and clicking and dragging and doing all the things that a typical nondisabled or sighted user does. It is just intended to help you read the text that is presented to you on the screen. It doesn’t actually help you with interacting with the device itself overall.

[00:11:29.33] A big reason why I think this is always really important to highlight, especially in the area where I work, primarily in the digital space, is because there’s a lot of push around what are called overlay tools to embed functionality like text-to-speech onto your individual website. And so the problem is that let’s say you do that. Let’s say you sign up for some widget or extension or whatever it might be, install it on your website, and now your website has, quote unquote, “text-to-speech” built into it.

[00:11:59.93] Well, how did the user get to your website in the first place? How did they open up Chrome? How did they log into Microsoft Windows? All of those things are interactions with the device itself that you have to be able to do using screen reading software independently.

[00:12:15.89] So putting functionality like text-to-speech at the individual site level doesn’t really benefit users. If somebody requires text-to-speech or high color contrast or anything like that, any type of assistive technology functionality, they are going to need it for everything, right?

[00:12:34.58] They’re going to need it for Windows. They’re going to need it for Outlook. They’re going to need it for Microsoft Word or Google Docs, whatever it might be. So the user is going to have that functionality already enabled at the operating system level. They will either be using functionality that’s built into the operating system, or they’re going to have an actual assistive technology application that’s installed on it.

[00:12:56.46] And the idea behind the Web Content Accessibility Guidelines, which is the standard that you use for determining the accessibility of websites, mobile applications, PDFs, it is not for you to provide assistive technology on your site. It is to make sure that your site or your document works with the assistive technology that the user already has.

[00:13:16.64] So I myself am blind. I use screen reading software. The screen reading software that you’re going to hear is called Job Access With Speech. It really is kind of the industry standard when it comes to screen reading software. It’s one of the oldest ones that is out there. It is the most feature-rich. It also is the most expensive.

[00:13:33.29] Previously, it used to cost $1,000 just to get a license for JAWS and get in the door and then a couple hundred dollars every few years to keep your software agreement up to date. They recently switched over to a subscription model. So now you can get started with JAWS for about $100 a year.

[00:13:50.09] But as a commercial screen reader and one of the oldest ones that’s been around, it’s definitely the most feature-rich. It also is the most widely used. There is an organization called WebAIM. They conduct a survey of screen reader users every couple of years, year and a half to two years.

[00:14:05.16] And one of the questions that they ask on there is, what is your primary screen reader? And JAWS always has come back as the number one most widely used screen reading software that is out there. I think a big part of that is because traditionally, blindness schools, blindness training programs, schools for the deaf and the blind, vocational rehabilitation, all these kind of government entities that train blind people have, for a very long time, pushed Windows devices with JAWS.

[00:14:31.25] So many folks, myself included, when they were trained to use it, they were trained using JAWS.

[00:14:38.81] On the Windows side of the house, there is also a free and open-source screen reader out there called Nonvisual Desktop Access, NVDA for short. NVDA is certainly not as feature-rich as JAWS. However, if all you’re looking for is a screen reader to allow you to operate your computer, read your email, go to websites, that type of thing, NVDA really does have kind of everything that you need there. And it is now the second-most widely used screen reader, according to that WebAIM survey.

[00:15:06.17] Between JAWS and NVDA, they account for more than 80% of desktop respondents to that survey. The rest are obviously split between other Windows-based screen readers or users of macOS. On macOS, there is a screen reader built in called VoiceOver. Usually, less than 10% of respondents in that WebAIM survey have identified macOS VoiceOver as their primary screen reader.

[00:15:31.46] So at Allyant, our digital division, our focus, when it comes to auditing websites on desktop, is focused on NVDA and JAWS, since that covers the widest range of users. If you ever decide to do any kind of accessibility testing of your own, you want to download a screen reader and give it a shot, I would highly recommend Nonvisual Desktop Access. It’s what we recommend to all of our clients’ devs who want to do any kind of quality assurance testing on their own.

[00:15:58.05] Warning about VoiceOver on macOS– it is an extremely powerful screen reader because, unlike JAWS and NVDA, VoiceOver is built into macOS. It’s part of the operating system. So it makes it a very powerful tool.

[00:16:10.17] However, it is very complicated to learn to use. The learning curve is steep compared to JAWS or NVDA. So it’s not one that we usually will recommend if you want to kind of learn to use a screen reader yourself and do your own testing.

[00:16:22.05] We generally don’t recommend VoiceOver on macOS, just because the experience is going to be very unique and specific to those macOS users. And it’s going to take you a lot longer to get on board with it compared to JAWS and NVDA.

[00:16:35.55] On the mobile side of things, there are screen readers built into your primary mobile devices as well, Android and iOS. On iOS, there’s a screen reader also called VoiceOver, so same name as on macOS. Apple really set the bar with the iPhone 3GS when they rolled out a device with a screen reader built into it.

[00:16:52.56] Prior to that, blind users like myself who wanted an accessible mobile device usually had to go with either Symbian or a Windows mobile device and then buy a third-party screen reader that you installed onto the phone. And it usually did not work very well.

[00:17:07.65] Apple really kind of revolutionized things for blind users like myself when they rolled out the 3GS with VoiceOver built into it. Google followed suit eventually, with Android rolling out a screen reader called TalkBack. VoiceOver kind of dominates the market. More than 70% of respondents to that WebAIM survey, if I remember the number correctly, cite that they use VoiceOver on iOS as their primary mobile screen reader.

[00:17:32.22] TalkBack is catching up, though, because it really, in the last few years, has really improved its functionality. It’s close– not quite, in my opinion, but, many would argue, close to or better than VoiceOver. So it’s definitely building its user base there among the blind community.

[00:17:48.34] So those are just the various kind of screen readers that are out there. These are all the most commonly used screen readers. There are certainly others that are out there.

[00:17:55.05] Windows has one built into it called Narrator. There’s one called ZoomText. There is System Access, lots of other screen readers out there. But these ones that I talked about– NVDA, JAWS, TalkBack, and VoiceOver on both iOS and macOS, these screen readers really encompass the vast majority of users of assistive technologies, or screen reading software specifically.

[00:18:15.85] So how does screen reading software work? Let me start this out by just letting you listen to what my screen reading software sounds like. We’re going to use the Aetna site here to demonstrate this in preparation for this presentation.

[00:18:30.82] One of the things that I did was I went through about half a dozen different health insurance company websites to kind of see whose website I could find that had some good accessibility issues that I could demonstrate, because I think it’s very helpful to be able to see what it’s like when you’re interacting with content that works, but also when you encounter content that doesn’t. And so Aetna here, we’re going to pick on them a little bit today. I found some good issues on their site that’ll make for a good demonstration, I think, and be eye-opening.

[00:18:55.45] So I’m just going to start at the top and let my screen reader read down a little bit so you can hear what my screen reader sounds like when I’m using it on a regular basis.

[00:19:02.76] Health insurance plans, vertical bar, Aetna. Same page link, skip to main content. Link graphic Aetna-health insurance plans and dental coverage graphic logo. Menu, quick link, navigation region, list of two items. Link, Contact Us.

[00:19:08.34] Link, Castilian Spanish/Español. List, [INAUDIBLE] search, edit combo collapsed. Explore Aetna site’s button menu, menu, quick link, navigation region and desktop menu navigation region, Explore Plans, menu collapsed, Member Support, menu collapsed, Find a Doctor, menu, Find a Medication menu. Link, Member Login. Desktop menu navigation region– region.

[00:19:16.80] So that was everything at the top of the page– the Skip to Main Content link, the search box, all the way down through the site’s main navigation. Chances are you probably weren’t able to understand much of what the screen reader said. That is certainly to be expected.

[00:19:30.82] But I wanted to show you what it sounds like when I’m using my screen reader on a daily basis, whether I’m reading emails, responding to Slack messages, doing accessibility auditing. This is what the screen reader sounds like for me when I’m using it.

[00:19:43.89] A typical user of screen reading software runs their software anywhere between 300 and 500 words per minute. But when you first install a screen reader or you first turn one on, they default to a very slow speech rate. And the reason for that is it does take users time to get used to operating it and listening to it.

[00:20:01.37] But over time, as you get more comfortable with it, you’ll get more used to the voice. You’ll get your hearing kind of trained for it. And you gradually will increase your screen reader speech rate until, again, a typical user of screen reading software is running anywhere between 300 and 500 words per minute.

[00:20:18.39] When you think about the amount of content that’s present on a typical website or in a typical document, running it at that super slow speech rate that it defaults to would take you a very long time to go through something like a traditional flow of going to an e-commerce website and picking a product and adding it to your cart, completing the checkout. If you were running it at the slow default speech rate, it would take you a very long time to complete that process. And so from an efficiency standpoint, you really do have to learn to be able to use the screen reading software at a faster speed.

[00:20:50.84] So for the rest of this demonstration, I’m going to go ahead and I’m going to slow the screen reader speech rate down so you’re able to understand and hopefully follow along with what it’s saying a little bit better here.

[00:20:58.97] Desktop, desktop, desk– desk– desk– desktop menu navi– desktop menu nav– health insurance plans, vertical bar, Aetna.

[00:21:05.93] OK. So now my screen reader speech has been turned down. So what I’m going to do here is I’m just going to start at the top, and I’m just going to keep pressing the down arrow key. And every time I press the down arrow key, you’re going to hear my screen reader go from just one element to the next to the next to the next.

[00:21:22.43] One of the big problems that developers have when they download a screen reader and they want to use it themselves for testing is that they treat screen reading software just like keyboard-only navigation. So they install the screen reader, they turn it on, they open the web page, and then they start using the Tab key to move around.

[00:21:40.85] The problem with that is that the Tab key only sets focus to things like links and buttons and text boxes, things that are actionable and can receive that focus. Things that are not actionable, like just paragraph text or nonlinked images or heading text, things like that, you’ll never hear it read out if all you’re doing is using the Tab key.

[00:22:02.21] So if there’s one big takeaway for any of you on this call here today, if you decide to test things out yourself using a screen reader, on Windows, use the up and down arrow keys. That will just take you forward and backward from one element to the next to the next. Don’t use the Tab key.

[00:22:20.45] One last thing before I dive into that demo is as I do this, think about how this is being presented to the screen reading software. So a sighted user looking at a web page, you look at it, and you perceive it in two dimensions. Content may be– for example, the search box might be in the upper right corner, and the site logo might be in the upper left corner. And the footer links might be towards the bottom of the page and center-aligned there.

[00:22:44.81] So you see it in these two dimensions– up, down, left, right. For a screen reader user, that doesn’t exist, except for when you’re talking about tables and grids. As a screen reader user, the page itself or the document is a one-dimensional flow of text that I can move forward and backward through. And that moving forward and backward is what I’m going to do using the up and down arrow keys here.

[00:23:09.81] So the order in which it will be read out is going to align with the order in which it’s presented in the actual HTML code. So how it looks visually does not matter, actually. It is entirely possible to have something look like it is at the top of the page and have it read as if it’s at the end of the page. It all comes down to how is it ordered in the code, and then as you move forward and backward through that kind of one-dimensional flow of text.
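The reading-order point above can be sketched in HTML and CSS. This is a minimal hypothetical example (class names and text are mine, not from the Aetna site): CSS can make an element appear visually first while the screen reader, which follows DOM order, reads it second.

```html
<!-- Screen readers follow DOM order, not visual order. Here CSS flexbox
     "order" pushes the banner below the article visually, but a screen
     reader still announces the banner first because it comes first
     in the code. -->
<style>
  .page   { display: flex; flex-direction: column; }
  .banner { order: 2; } /* displayed last, but still read first */
</style>
<div class="page">
  <p class="banner">Announced first by a screen reader, displayed last on screen.</p>
  <p>Displayed first on screen, announced second.</p>
</div>
```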

[00:23:35.72] So I’m just going to hit the down arrow key, and you’ll hear what I mean.

[00:23:38.45] Same page link, skip to main content.

[00:23:40.73] So hit the down arrow key one time, and it says, “Link, skip to main content.” So here it tells me, obviously, that it is a link, and it tells me what the purpose of it is. And you might have noticed it actually said “same page link.”

[00:23:53.12] Same page link, skip to main content.

[00:23:55.31] So what it communicates to me as a user there is that this “Skip to Main Content” link is going to move my focus somewhere on this current page, the same page that I’m already on. Rather than toggling that, I’m just going to keep hitting the down arrow key to move on to the next.
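The “same page link” announcement comes from a link whose target is an anchor on the current page. A minimal sketch (the id value here is hypothetical):

```html
<!-- A typical "skip to main content" link: the href points at an id on
     the same page, which is why JAWS announces it as a "same page link". -->
<a href="#main-content">Skip to main content</a>
<!-- header, navigation, and other repeated content go here -->
<main id="main-content">
  <!-- the page content that the link jumps focus to -->
</main>
```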

[00:24:09.08] Link, graphic, Aetna-health insurance plans and dental coverage, graphic logo.

[00:24:13.28] So here is the Aetna logo located up there in the site header. So you toggle this, and it would take you back to the site home page. Continuing on–
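The “link, graphic … logo” readout comes from a linked image with alt text. A sketch along these lines (the filename is hypothetical; the alt text is reconstructed from the screen reader readout):

```html
<!-- A linked logo image: JAWS announces "link graphic" plus the image's
     alt text. Without alt text, the user would hear only the filename
     or nothing useful at all. -->
<a href="/">
  <img src="aetna-logo.svg"
       alt="Aetna - health insurance plans and dental coverage logo">
</a>
```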

[00:24:20.48] Menu, quick links, navigation region.

[00:24:22.55] “Menu, quick links, navigation region.” So there’s a bit to chew on here. Navigation region– what this is is what is called a landmark. Different screen reading software calls it different things, but JAWS refers to it as “region.” The technical term, according to the Accessibility Guidelines, is a “landmark.”

[00:24:39.98] But landmarks usually are created on websites just by having good HTML. If you used a header tag to wrap your header and a main tag to wrap your main and a footer tag to wrap your footer, those tags create the landmarks for screen reader users. So here you have the site navigation is contained within the nav element. So the screen reading software is telling me, this is the start of the site’s navigation. And they even gave it an extra little label.

[00:25:04.88] Menu, quick links, navigation region.

[00:25:06.83] “Menu, quick links, navigation region.” So they actually labeled this as “menu, quick links.” So it’s got a little extra labeling on there.

[00:25:14.45] Sighted users can see where this starts and where it ends. Visually, you know, OK, here’s the first item in this navigation. Here’s the last item in the navigation.

[00:25:24.30] But for somebody who cannot see, you actually have to define the start and the end of it programmatically so the screen reading software actually tells you here’s where it begins and here’s where it ends. So now it’s telling me this is the start of the site’s kind of main navigation.
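The landmark structure described here comes from plain semantic HTML. A minimal sketch (the aria-label text is an assumption inferred from the “menu quick links” readout in the demo):

```html
<!-- Semantic HTML elements create landmarks for screen reader users:
     header, nav, main, and footer each announce where a region begins
     and ends. An aria-label adds the extra name heard in the demo. -->
<header>
  <nav aria-label="Menu quick links">
    <!-- site navigation links -->
  </nav>
</header>
<main>
  <!-- main page content -->
</main>
<footer>
  <!-- footer links -->
</footer>
```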

[00:25:38.39] So I’m going to go ahead and hit the down arrow again now.

[00:25:41.30] List of two items.

[00:25:42.66] So inside of this nav is a list of two items. So it tells me it’s a list, and it tells me how many items are in the list– a very handy benefit of using good list markup.

[00:25:52.83] Link, Contact Us.

[00:25:54.09] Contact Us link.

[00:25:55.14] Link, Castilian Spanish, Español.

[00:25:57.30] Español link. And you’ll notice it even said “Castilian Spanish/Español link.” So it actually is telling me the language that this text is using. For those of you web devs out there, that is the lang attribute at work.

[00:26:10.11] Where you actually say in the code, this is the language of this text, screen readers pick up on that, and they inform the user what language that text is using. So you can do that with French and whatever other languages as well. And it would work the same way. It would tell you what language and then read the actual text.

[00:26:27.75] List end.

[00:26:28.60] So the next thing you heard is “list end.” So what I mentioned before, defining the start and end, the list is doing that as well. So it told me when it started, how many items were in it. Now here it told me I’ve reached the end of the list. So this list contained those two links that you just heard read out.
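The list and language behavior just demonstrated both come from standard markup. A sketch (the hrefs are hypothetical): the ul/li elements produce the “list of two items … list end” announcements, and the lang attribute produces the “Castilian Spanish” announcement.

```html
<!-- Good list markup: JAWS announces "list of two items" on entry and
     "list end" on exit. The lang attribute tells the screen reader to
     announce and pronounce the link text as Spanish. -->
<ul>
  <li><a href="/contact-us">Contact Us</a></li>
  <li><a href="/es" lang="es">Español</a></li>
</ul>
```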

[00:26:43.44] Search.

[00:26:44.16] Now we’re getting into the site search.

[00:26:46.20] Edit combo collapsed.

[00:26:47.49] And so now it says “Edit combo collapse.” So this is the text box that I would use to type into the search. And here is actually where I’m going to demonstrate one of the first accessibility issues that I want to demo on this site.

[00:26:59.20] So what I’ve been using right now is what is called Browse Mode, or the virtual cursor. Again, different screen readers have different names for it. But what it allows me to do is read the content of the page. I’m hitting the up and down arrow keys, and I’m moving through the content.

[00:27:13.84] There are also shortcut keys I can use if I wanted to jump to those landmarks or I wanted to jump to the list. I can press the R key to jump to the landmarks. I can press the L key to jump to the list. I have all of these kind of reading and navigation options available to me in this Browse Mode.

[00:27:29.34] But what if I actually want to type into the text box? If I start trying to type, it said “edit combo box.” But if I actually start trying to type–

[00:27:37.32] There are no frames on this page.

[00:27:39.07] So I press the letter M, and it says, “There are no frames on this page.” I press the letter E.

[00:27:42.99] Wrapping to top. Search–

[00:27:44.25] It moves my focus back to the text box. I press D.

[00:27:47.73] Explore Aetna site’s button menu.

[00:27:49.41] And it moves me to an ARIA menu on the page. So typing here is not actually putting text into the text box. And so that is where different modes come into play.

[00:27:59.68] So right now, what you’ve seen is the virtual cursor, the Browse Mode. In order to type into a text field like this, I have to use what is called Forms Mode. I have to actually tell my screen reader, I want to interact with this text box. I want the keys that I press to go to it and not to the screen reader itself.

[00:28:18.19] So to do that, it’s pretty simple. I just press Enter on this text box. The screen reader already knows that I’m there.

[00:28:23.55] Search, edit combo collapse.

[00:28:24.96] “Search, edit combo collapse.” So I’m going to press Enter now.

[00:28:27.90] Enter.

[00:28:28.62] And you heard that little beep sound. That beep sound was the screen reading software informing me that I am now in Forms Mode. So I’m now in this text box. So if I start typing, whatever I type should actually go into the box.

[00:28:42.93] So I’m going to go ahead and do that.

[00:28:44.34] M-E-D-I.

(DESCRIPTION)
[00:28:46.24] Types search.

(SPEECH)
[00:28:47.01] So I typed in “Medi,” like I was going to search for Medicare or Medicaid. So something that you folks who can see the screen might have noticed is search results appeared. But my screen reader didn’t actually tell me anything about that.

[00:29:00.94] So this is the first accessibility issue that I want to highlight for you. So this here is what we would call an autocomplete. You type a search term into the box, or you begin typing a search term, and results appear to help you automatically finish the search term that you’re using.

[00:29:16.23] That is an autocomplete. And autocompletes do have specific accessibility requirements, first and foremost that you have to inform the user when results appear. So as I typed into this box, I actually had no idea that results appeared. The only way I would know is by asking somebody who can actually see it.

[00:29:35.19] Other requirements of an autocomplete like this are that you have to inform the user it’s an autocomplete to begin with. So when I land on this box, it should have actually told me, “Begin typing. Results will appear as you type,” something like that, some instructions to convey to me that this actually is an autocomplete text field.

[00:29:54.10] Then, when the results actually appeared, it should tell me that. And it should tell me how many results there are. So if there’s 5 or 7 or 10, it should tell me that. And then it should tell me, how do I get to them?

[00:30:06.11] So as a screen reader user right now, I don’t even know that results are there. But even if I did, how would I get to them? It didn’t communicate how to even access the search results. So there’s another requirement. You have to inform users that it is an autocomplete. You have to inform users how to actually access the results. You have to inform users how many results there are. And you have to make sure that the results can even be reached in the first place.
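The requirements Aaron lists here line up with the WAI-ARIA combobox pattern. A minimal sketch in TypeScript of the attributes and the live-region announcement involved — the id "search-results" and the announcement wording are invented for illustration, not taken from the site in the demo:

```typescript
// Sketch of the attributes an accessible autocomplete needs, per the
// WAI-ARIA combobox pattern. The id "search-results" and the message
// wording are invented for illustration.

// Attributes on the text box itself, so a screen reader announces that
// results will appear as the user types.
const inputAttrs: Record<string, string> = {
  role: "combobox",
  "aria-autocomplete": "list",       // "results appear as you type"
  "aria-expanded": "false",          // flips to "true" when results show
  "aria-controls": "search-results", // points at the results listbox
};

// When results appear, a polite live region should announce the count
// and how to reach the results, without moving the user's focus.
function resultAnnouncement(count: number): string {
  return count === 0
    ? "No results available."
    : `${count} results available. Press the down arrow to review them.`;
}
```

Had the site in the demo announced something like `resultAnnouncement(5)` when the results appeared, a screen reader user would have known both that results existed and how to reach them.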

[00:30:31.52] So one ironic thing here– I’m just going to hit the down arrow key a couple of times.

[00:30:36.50] Results, list, list, with five items. Medical application– to move to an item, press the arrow keys.

[00:30:42.81] So once I press the down arrow keys a couple of times, my screen reader did go into the results. And if I hit the down arrow key again–

[00:30:48.98] Medicare, 2 of 5.

[00:30:50.50] “Medicare, 2 of 5.”

[00:30:51.65] Medical policy, 3 of 5.

[00:30:53.63] “Medical policy, 3 of 5.” The navigation aspect of this actually works. But the problem is that users don’t even know that it’s there.

[00:31:01.32] So think of that. Keep that in mind as we move on and I show you the next issue on here. So the site search– effectively, a user of screen reading software may not even know that these results have actually appeared.

[00:31:15.08] So keeping that in mind–

[00:31:16.70] Escape.

[00:31:17.12] –let’s keep going down the page. We’re going to move past the search. I’m going to check the time.

[00:31:20.51] 12:35 P–

[00:31:21.50] 35. OK. I want to make sure that we leave time for Q&A and to show you the PDFs. So I’m going to move on down now past the search.

[00:31:28.55] Explore Aetna’s menu. Quick links, navigation region end.

[00:31:31.55] “Menu, quick links, navigation region end.” So here it told me this is the end of that first kind of navigation landmark that we talked about. So moving past it–

[00:31:39.36] Desktop menu navigation region.

[00:31:41.19] “Desktop menu navigation region.” So here is another site navigation. It is labeled. So they’re meeting the requirements there. I’m going to move on into that.

[00:31:50.52] Explore Plans, menu, collapse.

[00:31:52.23] “Explore Plans menu button, collapse.”

[00:31:54.78] Member Support menu collapsed.

[00:31:56.61] “Member Support menu button collapsed.”

[00:31:58.71] Find a Doctor menu.

[00:31:59.98] “Find a Doctor menu.”

[00:32:01.74] Member Explore Plans menu collapsed.

[00:32:03.81] These are informing me that they’re buttons, that they are menus, and that they are collapsed. So the implication or what it implies to me as a screen reader user is that I should be able to hit Enter on this, it’s going to expand a menu, and then I should be able to move through the menu. I should be able to go up and down through the menu there.

[00:32:22.09] So I’m going to go ahead, and I’m going to press the Enter key on this to try and toggle this menu. And let’s see what we get.

[00:32:26.91] Enter, Explore Plans menu collapsed. Menu, Explore Plans, 1 of 4.

[00:32:32.55] OK. So the menu has now expanded. Note it didn’t actually say “expanded” on there. But now what I’m going to do is I’m going to press the up and down arrow keys.

(DESCRIPTION)
[00:32:42.26] The page scrolls up and down.

(SPEECH)
[00:32:43.88] And you notice nothing is reading out.

[00:32:46.74] So here the menu is actually broken. Getting a little into the technical weeds here, there’s a lot of unnecessary ARIA. ARIA (Accessible Rich Internet Applications) is a set of additional HTML attributes that create accessibility features and functionality.

[00:33:02.15] However, it’s been done wrong here. And so the result is that when I try and toggle this menu, it’s broken. I can’t actually move my focus through these links. There’s no way I can do it.
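For contrast, here is a minimal sketch of a menu-button toggle that would work for a screen reader user, modeled as plain state transitions. The item names and the state shape are invented for illustration; the key points are the ones Aaron raises: opening must flip `aria-expanded`, and the arrow keys must actually move focus through the items.

```typescript
// Minimal sketch of a menu-button toggle that works for screen reader
// users, modeled as pure state transitions. Item names are invented.

interface MenuState {
  expanded: boolean;    // mirrors the aria-expanded attribute
  activeIndex: number;  // which item has focus; -1 when the menu is closed
  items: string[];
}

// Opening must both flip aria-expanded and move focus to the first item;
// otherwise the user hears "collapsed" and the arrow keys do nothing.
function toggleMenu(state: MenuState): MenuState {
  return state.expanded
    ? { ...state, expanded: false, activeIndex: -1 }
    : { ...state, expanded: true, activeIndex: 0 };
}

// Arrow keys move focus through the items of an open menu.
function arrowDown(state: MenuState): MenuState {
  if (!state.expanded) return state;
  const next = Math.min(state.activeIndex + 1, state.items.length - 1);
  return { ...state, activeIndex: next };
}
```

The broken menu in the demo fails at exactly these two transitions: Enter expands the menu visually, but `aria-expanded` never announces it, and the arrow keys never move focus into the items.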

[00:33:12.14] So this combination of a broken site search and a broken site main nav– how would a user be expected to find content on this website? They really wouldn’t, right? The site’s main menu and the site’s search are the two primary mechanisms by which a user would be expected to quickly locate content on this website. But in this case, we find accessibility issues with both that are so severe that users may actually not be able to use either one of them.

[00:33:42.63] So that is what it is like navigating a website using a screen reader– fairly straightforward when it comes down to that basic navigation. It’s just forward and backward, up and down arrow keys. Listen to how things read out.

[00:33:56.28] One final point I will make about this issue that I just showed you, the broken site navigation menu: that is an issue that would not be flagged by any form of automated scanning tool currently on the market. Do not be misled by the fancy marketing lingo put out by some overlay tools and other entities in this space.

[00:34:19.26] Automation has a role to play, certainly. We have our own automated scanning tool. We use it to check for some issues that are very difficult to find by manual testing, such as duplicate ID attributes.

[00:34:30.45] However, as soon as you throw interactivity into the mix, as soon as the problem is related to something that requires user interaction, automated scanning tools fail. This issue that I just showed you of the menus not working for screen reader users when they attempt to toggle them, or screen reader users not being informed that the autocomplete has search results, none of those would be flagged by any automated scanning tool that is on the market right now.

[00:34:54.84] Whether they say they are powered by AI or machine learning, insert buzzword here, there simply is not an automated tool out there that can truly identify accessibility issues, especially when interaction comes into play. That is why you must rely on that manual testing alongside your automation in order to be able to determine if something is actually usable.

[00:35:18.81] And at Allyant, all of our audits are conducted in teams of two, consisting of a sighted auditor and a screen reader auditor, the screen reader auditor being somebody like myself, somebody who is blind, who actually uses the screen reading software. And our goal is to ensure that the experience is the same for both users.

[00:35:34.57] So if a sighted user can click this menu and expand it, I, as a screen reader user, should be able to toggle that menu and expand it. And that level of interactivity can’t be found by automation as it stands today.

[00:35:45.57] So that is a nice little example of using websites with the screen reading software. I’m going to switch over now.

[00:35:54.11] 12:39 P–

[00:35:55.07] Just watching our time. I’m going to now switch over to showing you PDF documents. So I’m going to close this window.

[00:36:01.10] Alt-F4, leaving menus. BCBS webinar, PDF file.

[00:36:04.19] My screen reading software reverted its speech back to its normal rate. So I have three examples here that I’m going to use. And I’m going to preface this by saying I wanted to get a medical-related document for this presentation.

[00:36:17.03] The Centers for Medicare & Medicaid Services (CMS) has an example “Summary of Benefits and Coverage” document that is actually quite accessible– I thought it would make for a great demonstration here. The fun thing I got to do was take that document and break it, because I want to show you what it’s like when you encounter both accessible and inaccessible documents. The actual document that CMS released is very accessible. But I’m going to start by showing you broken versions of it, because I think it makes more sense when you see it that way.

[00:36:49.57] So I’m going to start by showing you what most users like myself would just call an image-only PDF.

[00:36:56.89] New button collapsed, 1 of 9. Allyant BCBS webinar, [? escape. ?] Name [INAUDIBLE] button. [INAUDIBLE] mode.

[00:37:00.94] Sorry, my tool stopped moving.

[00:37:02.16] [INAUDIBLE] Toolbar [INAUDIBLE]. Search box edit, navigation panes, Sample-Completed– Sample SBC Image PDF. Enter.

(DESCRIPTION)
[00:37:07.14] Opens a PDF.

(SPEECH)
[00:37:08.42] Sample SBC Image PDF-Adobe Acrobat Reader, left paren, 64-bit, right paren. Alert, colon, empty document.

[00:37:12.75] So I have opened up this PDF here, and I’m going to actually–

[00:37:16.52] Yes, pa– pa– page down. No button. Yes button. [INAUDIBLE] No button. [INAUDIBLE] space. Document, page–

[00:37:21.83] So– sorry.

[00:37:23.01] [? Plus ?] version– [INAUDIBLE] escape.

[00:37:25.45] For some reason, the screen reader’s voice keeps jumping back up.

[00:37:27.32] Sample SBC Image–

[00:37:28.26] So what the screen reader says– so I’ve opened up this document. And what the screen reader says here when I try to read the content–

[00:37:35.12] Alert, colon, empty document.

[00:37:36.32] –is “Alert, colon, empty document.” And that’s it.

(DESCRIPTION)
[00:37:41.15] Actually a detailed table.

(SPEECH)
[00:37:42.57] This type of PDF document is very common, actually. And it usually results from one of two things. Here, you took a piece of paper, ran it through a scanner, and chose to save the output as a PDF.

[00:37:59.51] And what the actual scanner created is an image, not an actual document that contains text. So here all it says is “Alert, empty document,” because there is no text in this document. All there is, as far as the screen reading software is concerned, is a single unlabeled image.

[00:38:17.83] So for me, as a user, in order to get anything useful out of this document, I have to run it through a process called optical character recognition, which scans the image, and it tries to get the text out of the image. And so the quality of that is going to vary, depending on the resolution of the document, the language that it’s in, the spacing– many, many things.

[00:38:39.99] But optical character recognition generally makes it OK, somewhat readable. Usually, there are areas where the user has to kind of intuit or figure things out, because words will be misspelled or something like that. So OCR is not a great solution. But it’s the best the user has when they run into a situation like this, where all the PDF document contains is an image.

[00:39:02.35] So now I’m going to take you– so that’s the bad. That’s the super bad one, right? So now I’m going to show you the OK one, the still bad, but not as bad as bad can be. I’m going to close this document.

[00:39:14.55] Sample-SBC-Image-PDF [INAUDIBLE], [? plus ?] version 2, [? plus ?] version [? 2.0, ?] BCBS Webinar PDF–

[00:39:18.66] And now I’m going to open up the next one. Sample-Completed-SBC-inAccessible-Format, Enter.

(DESCRIPTION)
[00:39:22.29] Opens another PDF.

(SPEECH)
[00:39:23.49] Sample-Completed-SBC-inAccessible-Format-01-2020.pdf-Adobe Acrobat Reader, left paren, 64 [AUDIO OUT]

(DESCRIPTION)
[00:39:28.54] The page is sideways.

(SPEECH)
[00:39:30.81] plan option 1 coverage for colon–

[00:39:31.74] OK. So now I’m going to try and turn the speech rate down again.

[00:39:34.26] Sample [INAUDIBLE] summary of bene–

[00:39:36.63] OK.

[00:39:37.05] Sample-com– Summary of Benefits and Coverage, colon, with this plan– sum– sample– summary of benefits.

[00:39:43.37] Too slow.

[00:39:44.10] Summary of–

[00:39:45.12] OK. So as you can hear, the text is beginning to read out.

[00:39:48.31] Summary of Benefits and Coverage, colon, what this plan covers and what you pay for covered services. Coverage period, colon, 01/01/2022-12-31/2022, insurance company 1, colon.

[00:40:01.77] OK. So as you can see, text is actually reading out here. I am just going to continue letting it read a little bit, though, and let’s hear what we get if we just keep going.

[00:40:11.43] Plan option 1 coverage for, colon, family plan type, colon, PPO, 43, graphic, T-H-E “summer-wye” O-F benefit-S an-D “cuv-er-ag-ee,” left paren, SBC, right paren, “doh-coo-men-tee” “will-el” “help-ee” “yo” you choose your “helt-aitch” plan period.

[00:40:25.65] Notice how basically everything that it is saying right now makes no sense. It is all very broken up and gobbledygook. The words are not complete.

[00:40:35.09] And so this is an example of a PDF where the text is kind of there, kind of readable. It doesn’t flag it as an image-only PDF. However, the text is broken. It’s not reading out correctly. It’s not reading out in a proper order. It’s not reading out with proper spacing. It’s not reading out complete in any kind of way.

[00:40:55.79] So this is better than the image PDF. But as you can see, it didn’t take me very long moving into this document before I started encountering content that wasn’t reading out in any kind of a logical way. So here this document was generated using Print to PDF.

[00:41:14.14] Anytime you encounter something that says “Print to PDF,” be very cautious of it from an accessibility standpoint. With most of the print-to-PDF drivers out there, whether it’s CutePDF, Microsoft XPS, or the one that’s built into Windows, the result is going to be something like this. It won’t be a full-on image, but it’s not going to be tagged. And it’s not going to be particularly readable, either.

[00:41:41.08] So you saw two examples of super bad and not great. And I will tell you, as a user of screen reading software, the majority of PDF documents that I encounter out there on the internet fall into one of these two baskets. So what does it actually look like encountering an accessible PDF? Let’s check that out.

[00:41:59.83] Alt-F4. BCBS–

[00:42:01.51] OK. Sample-Completed-SBC-Accessible-Format-01, Enter. Summary of Benefits and Coverage Completed Example-Adobe Ac–

[00:42:07.39] OK. So here I’ve got this same document open again now, but I’ve got the accessible version. This is the one that CMS actually released. So I’m going to go ahead now. I’m going to slow the speech rate down again.

[00:42:18.17] Heading level 1, summary.

[00:42:19.36] And now I’m going to let the screen reader again just start at the top and work its way down.

[00:42:22.81] Sample-Completed– Sample-Completed-SBC-Accessible– heading level– heading level 1, “Summary of Benefits and Coveragee,” colon, “What This Plan Covers & What You Pay for Covered Servicess,” heading level 1, “Insurance company 1,” colon, “Plan Option 1,” “Coverage Period,” colon, 01/01/2022-12/31/2022, “Coverage for,” colon, “Family,” vertical bar, “Plan type,” colon, “PPO.” “Picture of exclamation point to label important information” graphic.

[00:42:57.61] “The Summary of Benefits and Coverage, left paren, SBC, right paren, document will help you choose a health”– link– “plan. The SBC shows you how you and the”– link– “plan would share the cost for covered health care services. Note”–

[00:43:12.31] So as you can see, the text here that was reading out all broken up in that second document that I showed you is actually reading out correctly. So there’s a few other accessibility features that you might have noticed as you were listening through here. But one of them is that it identified headings.

[00:43:28.90] So I’m just going to jump back up to the top.

[00:43:30.90] Sample– heading level 1, “Summary of Benefits and Coverage,” colon, “What This Plan Covers and What You”–

“Summary of Benefits and Coverage, What This Plan Covers.” And it said, “Heading level 1” there. That heading level 1 is a really important thing because it communicates to me, as a screen reader user, that this document has proper headings. And that means I can use those headings to quickly jump to different sections in the document.

[00:43:52.51] Think of it as a sighted user visually skimming a document looking for that big bold text that precedes various sections, right? Having headings that are properly tagged in a document gives me, as a screen reader user, that same ability to quickly see what the various sections of the document are and jump to the one that I want.
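The heading-jump behavior described here can be sketched as a simple walk over a document’s tagged elements. The flat node list below is an invented stand-in for the accessibility tree a real screen reader walks:

```typescript
// Sketch of what the H key does: jump to the next element tagged as a
// heading. The flat node list is an invented stand-in for the real
// accessibility tree a screen reader walks.

interface DocNode {
  tag: string;  // e.g. "h1", "p", "table"
  text: string;
}

// Return the index of the next heading after position `from`, or -1
// if there are no more headings in the document.
function nextHeading(nodes: DocNode[], from: number): number {
  for (let i = from + 1; i < nodes.length; i++) {
    if (/^h[1-6]$/i.test(nodes[i].tag)) return i;
  }
  return -1;
}
```

This is why the tags matter: an untagged PDF gives the screen reader nothing but a flat run of text (or a single image), so there is nothing for a jump like this to land on.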

[00:44:10.37] So if I just press the H key–

[00:44:11.83] “Insurance Company 1,” colon, “Plan Option 1,” heading level 1.

[00:44:15.64] “Insurance Company 1, Plan Option 1.” Press heading again.

[00:44:19.03] “Excluded Services & Other Covered Services,” colon, heading level 1.

[00:44:23.14] “Excluded Services and Other Covered Services.” So as you can see here, as I press the H key, it’s just jumping me from one section in the document to the next to the next. And that’s the result of this being an accessible tagged PDF. I’m going to check the time again here.

[00:44:37.44] 12:48 P–

[00:44:38.43] 12:48. Give it two more minutes here, and then we’ll switch over for questions. The last thing that I will highlight here is you might have also noticed that it was reading out links.

[00:44:46.17] Heading level 1–

[00:44:47.05] I’m just going to arrow down through here until we get to one of these again.

[00:44:49.53] Heading level 1, “Insurance Company 1,” colon, “Plan Option 1,” “Coverage Period,” colon, 0– coverage– “picture of exclamation point to label important information” graphic.

[00:45:00.00] So here’s a labeled image– “picture of exclamation point to label important information” graphic. So here you can actually see they have an image here, and that image has been properly described and is reading out to the screen reading software. I’m going to keep going, though.

[00:45:11.73] “The Summary of Benefits and Coverage,” left paren, “SBC,” right paren, “document will help you choose a health”– link– “plan.”

[00:45:18.93] Link– “plan.” So as you can see here, the links that are in this document are also being read out to the screen reading software. They’re reading out as labeled, and they’re informing me that they are actually links and can perform a function. So that is another advantage of a properly tagged PDF: it informs users that an element is a link, and I could toggle this link, just like a sighted user could by clicking on it.

[00:45:43.48] And then, finally, tables– I’m going to jump to a table in this document.

[00:45:47.32] Table with three columns and seven rows.

[00:45:50.09] So here we’ve got a table with three columns and seven rows. And let’s see what the columns are.

[00:45:53.71] Answer– “Important Questions,” space.

[00:45:55.90] There’s an “Important Questions” column.

[00:45:57.73] “Answers,” column–

[00:45:59.11] –an Answers column.

[00:46:00.23] “Why This Matters,” colon.

[00:46:01.75] –and a “Why This Matters” column. So I’m going to move down now to the first cell of the second row. So let’s see what the question is.

[00:46:09.37] “What is the overall”– link– “deductible?”

[00:46:11.83] “What is the overall deductible?” So that’s the question. I’m going to move over now to the answer cell.

[00:46:17.50] Answers– dollar 500/individual or dollar 1,000/family. Column 2.

[00:46:23.65] $500 individual or $10,000 family– or, I’m sorry, $1,000. $500 individual, $1,000 family. So what you might have noticed, though, is it actually said “answer” right before the start of it.

[00:46:34.18] That is what is called a column header. And so for this example, it’s not a huge deal if it didn’t read out. It’s fairly clear the question is on the first column. The answer is on the second column.

[00:46:46.60] But imagine if your table just contained nothing but dollar amounts. If your columns are gross amount, net amount, year-to-date amount, tax amount, Q1 amount, Q2 amount, all there is are dollar amounts in these cells. Just reading through the cells, you won’t know what the purpose of them is.

[00:47:02.92] So these column headers, what they do is remind the user what each column contains. So when I moved into the answer column, it actually said, “Answer.” Now, if I move to that third column–

[00:47:15.71] “Why This Matters,” colon, “Generally, you must pay all of the costs from”– link– “providers” up to the”– link– “deductible amount before this”– link– “plan begins to pay.”

[00:47:25.15] So there it again reminded me what this column is. This is the “Why This Matters” column. And for this one, it’s not as big a deal if it weren’t there. But on many, many tables, those column headers are extremely important. A sighted user can just quickly glance up and see what the column is. But a screen reader user can’t do that. They’ve got to have the column headers properly tagged in the document to remind them what the purpose of each column is.
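The pairing Aaron describes, where each data cell is announced together with its tagged column header, can be sketched like this. The data mirrors the demo document; the exact announcement wording is an assumption, since screen readers phrase it in different ways:

```typescript
// Sketch of how a screen reader pairs a data cell with its tagged column
// header when announcing it. Announcement wording is an assumption; the
// data mirrors the table in the demo document.

function announceCell(headers: string[], row: string[], col: number): string {
  return `${headers[col]}: ${row[col]}`;
}

const headers = ["Important Questions", "Answers", "Why This Matters"];
const deductibleRow = [
  "What is the overall deductible?",
  "$500/individual or $1,000/family",
  "Generally, you must pay all of the costs up to the deductible amount.",
];
```

Without the header tags, the user moving into column two hears only “$500/individual or $1,000/family,” with no reminder of what that amount means.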

[00:47:52.15] So that is an example of what it is like with an image-only PDF, a meh, OK text-somewhat-readable PDF, and an actual, fully accessible tagged PDF. I know we are kind of down to our last 10 minutes, I believe, so happy to open it up for questions at this point.

[00:48:17.69] All right. So we do have one question that says, “How many people does Allyant employ as manual screen reader auditors?”

[00:48:27.73] Our audit team is somewhere between 30 and 40. I want to say around 35, 36, somewhere in that area– yeah, but between 30 and 40 auditors on our audit team. And 8 to 10 of them are folks like myself who are blind or low-vision, actual users with disabilities.

[00:48:50.19] Regarding the broken menu, it says, “I already know that my dev team would tell me to press the Tab key instead of the arrows. How would you respond to this pushback?”

[00:49:01.22] Screen reader users don’t use the Tab key. That suggestion reflects a fundamental misunderstanding of how users navigate. Sighted keyboard-only users use the Tab key because they can already see the nonfocusable content, like paragraph text. But screen reader users don’t use the Tab key like that.

[00:49:18.84] So it just simply does not work for users of screen reading software. If they need more convincing, tell them to download a screen reader and try to read paragraph text with the Tab key. They’ll find that it never reads out. That’s why screen reader users don’t use the Tab key in that way.

[00:49:35.35] The only real exception is when you’re talking forms. If you’re filling out a form, then the users do tend to use the Tab key to move from one text box to the next. But when you’re talking standard web navigation, they don’t use the Tab key in that way.

[00:49:54.46] Are there any other questions? Either you can speak out or put them in the chat. Oh. Go ahead.

[00:50:09.93] Hi. This is Tami Bevan from Premera Blue Cross. And I just wanted to thank you, Aaron, and to Allyant in general. This is my first time seeing a live demo, and talk about eye-opening.

[00:50:25.63] I totally agree. I just wanted to say this out loud versus just in chat, just to be a bit more powerful and tell you just how much I personally really appreciated this demo. Thank you so much.

[00:50:39.40] Oh, you’re very welcome. Thank you so much for inviting us. It was my pleasure.

[00:50:44.20] Yeah. And I would also like to just chime in real quick and say thank you. And for those of you on the call who joined us and watched what Aaron presented, the first time I saw one of these, it was really amazingly eye-opening to me, too, because of the amount of stuff that I take for granted as a sighted person that I couldn’t even conceive of that someone like Aaron goes through.

[00:51:09.64] But the reality is, as he said, the majority of PDFs posted online are like that– either the really bad or the just “meh” version. And so the document remediation services that we provide to a number of your plans would give you, essentially, the best version of these, like the one Aaron showed last. Everything is tagged properly. The tags, which are the part the assistive technology reads, are semantically appropriate for what’s in the physical view. And someone who is blind, or has other disabilities for which they use assistive technology, would actually receive the proper information and know how to interact with the document. Because every document has a potential for action, which is why you put it out online or send it to begin with.

[00:51:58.35] So we can certainly help you make those fully accessible, and with Best [? in ?] Blue pricing and SLAs as well. Any other questions?

[00:52:21.25] I think we can take silence as golden. I first want to thank everybody for joining, and I especially want to thank Aaron for this amazing presentation and Andy for setting this up as well.

[00:52:35.20] So, again, I’ll reiterate what Andy stated in the beginning. If you guys have any questions or want to learn more, you guys know where to find myself or Andy. But we definitely appreciate your guys’s time today.

[00:52:49.60] I just put my email address in the chat if anybody is interested or has any questions you think of later, or would like to perhaps schedule like an encore presentation of this for your teams internally, whatever it might be, please feel free to reach out to me. We’re here to help, and we’re here to help you make your content and communications more accessible to the people that you serve.

[00:53:16.49] So thanks again for your time and your attention. It was a pleasure doing this presentation for you. And for those of you that we already work with, thank you. We appreciate your business.

[00:53:28.31] For those of you that we don’t, hope to be working with you soon. Open enrollment’s right around the corner. Anything that we can do to help, we are here. Thank you.

[00:53:37.36] All righty. Thank you guys. Have a great rest of your day.
