A Tale of Two Rooms: Understanding screen reader navigation

Posted on Thursday, 25 January 2018 by Ryan Jones

[Image: street view of a panadería (bread shop) in Spain, with a closed garage-style door and an open shop door]

For those of us who use screen reading software such as JAWS, NVDA, or VoiceOver to access information on the web, the user experience can be quite different from that of people who see the content. One of my goals throughout the many accessibility-focused training classes I have led has been to help others more accurately understand what it is like for someone using screen reading software to navigate a web page.

French translation: Il était une fois 2 pièces: comprendre la navigation avec un lecteur d’écran by Damien Pobel.


It is commonplace for most people to want to jump straight into the technical details:

  • What keystrokes do I press?
  • Which screen reader should I test with?
  • What browser should I use?

While these are all important considerations, it is best to first step back and ask:

What is the experience like and how can I simulate that experience if I can see the screen?

To that end, I would like to present several illustrations that have been effective for communicating answers to these questions.

An open door

Let’s set the stage for our first illustration. Imagine you have just opened a door and are looking into a large conference room. In the center of the room is a large conference table with 10 chairs (5 on each side of the table). Seated at the table are 2 men and 2 women. All 4 people are seated on the same side of the table, facing the door where you are standing. On the far side of the room (behind the people seated at the table) are 3 large windows that look out over a courtyard with benches, flowers, and small trees. On the right side of the room is a counter with a coffee pot and microwave sitting on it. The left side of the room has a large flat screen television mounted on the wall.

Assuming you are not already familiar with the layout of this room, what is the first thing you would do upon opening the door? Some of you might visually scan the room from left to right. Some might scan from right to left. Others might first look at the table in the center and then scan the perimeter of the room. No matter how you do it though, most of you would in some way scan the room with your eyes to get a quick sense of the layout and contents of the room. The scan might only take a couple of seconds and most of you won’t even realize you did it. You might then focus in on certain elements of interest such as the people sitting at the table or the large flat screen television.

A darkened room

Now, let’s re-imagine the scene and this time when you open the door, the room is completely dark. No light is present and you can see absolutely nothing at first glance. You have been given a small flashlight though and when you switch it on, the light allows you to see a small area at a time. The area you can see is a small circle about 2 feet in diameter and nothing outside that circle is illuminated.

How would you now observe the contents of the room?

Some of you might move the light back and forth from left to right, starting at your feet and moving away from you. Some of you might start from the back of the room and move the light toward you, while others might randomly point the light at various places in the room with no particular pattern. As you move the light around the room, you will need to build a mental map or image of what is in the room and how it is laid out. Building this mental map will take significantly longer than visually scanning the room when all the lights were on. As you move the flashlight around, you will need to remember each thing you have seen and how it all relates together. If you forget where something is located, finding it again will take more time.

Was the counter with the coffee pot on the right side of the room or was it in the back? How many people were sitting at the table? Was it 4 or 5?

Answering these questions when you can see the entire room at once will take little effort but answering them when you can only see a small area at a time will take much longer.

An analogous scenario

This second scenario is analogous to how a screen reader user reviews a web page or smartphone app.

While keyboard commands or touch gestures can move the screen reading software around the page, it is only possible to read one thing at a time. There is no way for the visually impaired user to get a quick (1 to 3 second) overview of the page similar to what someone who can see the screen might do.

Fortunately, if accessible page navigation techniques such as headings or regions are used, this can help the screen reader user focus in on certain areas of the page.
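In HTML, those navigation points are typically provided by heading elements and landmark regions, which screen readers can jump between directly. As a minimal sketch (the headings and labels here are hypothetical, not from any particular site):

```html
<header>
  <h1>Company News</h1>
</header>

<nav aria-label="Main">
  <!-- site navigation links -->
</nav>

<main>
  <h2>Latest Articles</h2>
  <!-- article summaries -->

  <h2>Upcoming Events</h2>
  <!-- event listings -->
</main>

<footer>
  <!-- contact details -->
</footer>
```

With a structure like this, a screen reader user can press a single key (for example, H in JAWS or NVDA) to jump from heading to heading, or use landmark navigation to move straight to the main content, much like spotting the red dots described below.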

Stepping back to our dark room scenario from above, imagine there are now small red dots of light on key elements in the room such as the table, counter, television, and each person sitting at the table. You still have to use the flashlight to look around, but the red dots give you an idea of where the most important things might be located.

Another significant challenge that screen reader users may face is dynamically changing content on a page. Returning to our light and dark room examples from above, pretend that one of the men gets up and moves to the other side of the table. There are now 2 women and 1 man on one side of the table and 1 man on the other side. In the lit room example, you would most likely notice the movement as it happens. Even if you weren’t looking directly at the man who moved, you would probably notice movement out of the corner of your eye and then turn to see what was happening. In our dark room scenario it would be very difficult to know that anything happened unless you happened to have the flashlight beam directly on the man who moved at the exact right time. It is more likely you would never know he moved until sometime later when you happen to move the light over the chair he vacated.

This in effect is what happens when page content changes but does not alert screen reading software. The user may never know that something changed on the page unless they happen to move across the new information and realize that it is now different.

This challenge is best solved by ensuring the dynamic content uses techniques such as alerts or live regions which cause the screen reading software to announce the updated information to the user.
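On the web, that announcement is typically made with ARIA live regions. A minimal sketch (the element IDs and message text are hypothetical):

```html
<!-- A polite live region: announced when its content changes,
     without interrupting what the user is currently reading -->
<div aria-live="polite" id="status"></div>

<!-- role="alert" is an assertive live region for urgent messages -->
<div role="alert" id="errors"></div>

<script>
  // Updating the region's content is what triggers the announcement
  document.getElementById('status').textContent =
    'Your changes have been saved.';
</script>
```

Note that the live region should exist in the page before its content changes; screen readers watch existing regions for updates rather than scanning for new ones.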

In our dark room scenario above, the man might verbally announce that he is moving from one side of the table to the other. Even if your light wasn’t on him, you would hear the announcement and better understand what is changing.

For one final illustration, consider the windows that look out onto the courtyard. In the lit room scenario, you would quickly see that the windows face a courtyard with benches, flowers, and trees. In the dark room example, though, even if you pointed the light at the windows, you would not be able to see what was outside. This illustrates visual elements such as images that do not have a text label associated with them. Screen reading software can identify that an image is present on a page, but the only way it can communicate information about the image is through the alternative text label assigned to it. Without that text label, the screen reader user would have no idea what the image is showing. In our dark room example, a sign might be placed next to the windows with a description of what appears outside. When you locate the windows with your light, you would then be able to read the sign.
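In HTML, that "sign next to the windows" is the alt attribute on an image. A minimal sketch (the filename is hypothetical):

```html
<!-- With alt text, the screen reader can describe the view -->
<img src="courtyard.jpg"
     alt="Courtyard with benches, flowers, and small trees">

<!-- Without it, the user may only hear "image" or the raw filename -->
<img src="courtyard.jpg">
```

For purely decorative images, an empty alt attribute (alt="") tells the screen reader to skip the image entirely rather than announce something unhelpful.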

Better understanding

One of the best ways to better understand the screen reader user experience is to try it yourself. It is certainly advantageous to try using screen reading software on your own to navigate web page content. In addition though, here is a simple exercise you can do to simulate the scenarios illustrated above. (I don’t recommend finding random conference rooms with people in them and turning out all the lights!)

  1. Print out a paper copy of a web page. I recommend one that is not too large but contains a variety of elements such as text, links, menus, etc.
  2. Find a blank sheet of paper and make a small hole in the center of it. The hole should be about the size of 2 or 3 words (around a half inch in diameter is usually sufficient).
  3. Place the paper with the hole in it over the printout of the web page and try making sense of what is there. Slide the paper with the hole around in order to read the contents of the web page printout below.

It will probably be very difficult and time consuming to understand what is on the page but this gives you a general idea of what it is like for a screen reader user (especially if no page navigation techniques are used).

About Ryan Jones

Ryan is a project manager and trainer for TPG. He has been delivering assistive technology training and consulting for over 10 years and has worked with many companies and government agencies to help make their technology more accessible. Ryan has also worked with various companies to make sure their kiosk machines are accessible and usable by all.

Comments

  1. That was the most clear and substantial explanation of how a screen reader works. Now I’ll have something new to teach to my computer science students!

  2. Thank you so much for writing this. It clarified a lot of the questions I’ve had on mapping and perception, and the use of screen reader.

  3. Great article that gets to the real challenge at hand. When I’m developing I use two monitors. Code on one, browser on the other. I then turn the brightness right down (sometimes off) on the screen with the browser. So I can still see my code, but still properly test the effect.

  4. Excellent post. Someone once shared with me this apt perspective, that we are all just ‘temporarily-abled’, someday you or I could need assistive software or hardware to consume information online. As a web developer and accessibility consultant, I believe we need to get our industry standards, browser apps and tools, and web design and content creation software to be a lot smarter, and bake accessibility in at the core (not bolted on as an afterthought as it often is now). This way designers can design, content creators can build, and all users can consume the information served on the web in the best experience possible no matter the method of consumption. Until that day, we need to be doing whatever work it takes to make websites accessible for all.

  5. Great post — loved the analogy! And also thank you to John O’Neill for sharing the Funkify Simulator. I’ll also do the paper print out with the hole exercise!
