TPGi at the W3C Web of Things Plugfest, March 2018

I recently attended the Spring 2018 edition of the W3C Web of Things (WoT) PlugFest in Prague, the Czech Republic (or Czechia, if you will). This event formed part of the wider 2018 W3C Web of Things IG/WG joint F2F meeting, sponsored by Siemens, held during the last week of March. It was my first time attending a Web of Things event (and indeed any form of W3C event!), but my initial trepidation was quickly overcome by thoughts of how much of what I saw could be applied within an accessibility context, and of the potential for future accessibility research and application within this rapidly expanding area.

The PlugFest itself was sponsored by Oracle, and hosted in their Czechia office a few miles west of downtown Prague. Aside from the minor teething problems one would expect when trying to get multiple networked devices to talk to each other in a single room, the organizers did a fantastic job of getting us all set up, keeping everything working, and keeping us well fed and watered throughout the day, so I’d personally like to thank them for that!

In this post, I will summarize my experience of the PlugFest. For a more complete overview, I encourage you to check out the full minutes of the W3C WoT F2F, which cover not only the PlugFest but also the other activities and events that took place over the course of the week.

What is a PlugFest?

The purpose of WoT PlugFests is to bring together representatives from various industrial, commercial, and academic organizations in a semi-formal environment, where they put together proof-of-concept demonstrations based on the current specifications.

To this end, tables were strewn with network cables, hubs, virtual assistants, speakers, webcams, and slightly quirkier items such as lamps and other household objects. Participants then spent their time putting together thing “mashups” to view or control one or more of these devices based on their Thing Description (TD). It is certainly a surprising experience when the network-connected lamp on the table in front of you suddenly lights up seemingly of its own accord, only to discover it is being controlled by another person or device sitting two or three tables away.
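To make the idea of a TD-driven mashup slightly more concrete, here is a minimal sketch, in Python, of the kind of consumer a participant might knock together, assuming a lamp that describes itself with a JSON Thing Description and exposes its properties over plain HTTP. The address, the property name (status), and the payloads are all hypothetical; a real TD defines its own interaction names and protocol bindings.

```python
import requests

# Hypothetical address at which the networked lamp serves its Thing Description (TD).
TD_URL = "http://192.168.0.42:8080/lamp"

# The TD is a JSON document describing the thing's properties, actions, and
# events, plus the protocol "forms" (here, plain HTTP endpoints) used to reach them.
td = requests.get(TD_URL).json()
print("Consuming:", td.get("title") or td.get("name"))

# Read the lamp's (hypothetical) on/off property by following the href in its form.
status_form = td["properties"]["status"]["forms"][0]
print("Lamp status:", requests.get(status_form["href"]).json())

# Write the property to switch the lamp on, assuming the TD allows writes.
requests.put(status_form["href"], json="on")
```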

Given my lack of practical experience in this area, I spent most of my time observing and soaking up knowledge, although, thanks to Soumya Kanti Datta from Eurecom, I did manage to write my first-ever routine to get JAWS to announce the status of a car’s rear door (open or closed). Baby steps, maybe, but everyone has to start somewhere!
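For the curious, the routine amounted to little more than polling the relevant property and handing the result to the screen reader. The sketch below is an illustrative reconstruction rather than the code written on the day: it assumes the door status is exposed as a WoT property over HTTP (with a hypothetical endpoint and property name) and that JAWS is driven through its COM automation interface on Windows.

```python
import time

import requests
import win32com.client  # pywin32; requires JAWS to be installed and running on Windows

TD_URL = "http://192.168.0.50:8080/car"  # hypothetical Thing Description endpoint

# JAWS exposes a small COM automation object for driving speech from other programs.
jaws = win32com.client.Dispatch("FreedomSci.JawsApi")

td = requests.get(TD_URL).json()
door_form = td["properties"]["rearDoorStatus"]["forms"][0]  # hypothetical property name

last_status = None
while True:
    status = requests.get(door_form["href"]).json()  # e.g. "open" or "closed"
    if status != last_status:
        # Second argument: True would interrupt (flush) any speech already in progress.
        jaws.SayString(f"Rear door is {status}", False)
        last_status = status
    time.sleep(2)  # naive polling; a real mashup might subscribe to a TD event instead
```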

So, apart from suddenly realizing that my development skills are very rusty these days, what did I learn, particularly in terms of accessibility?

WoT User Interface?

My main takeaway from the PlugFest was one of change. I left feeling I needed to reevaluate my understanding of the whole concept of the user interface (UI), of accessibility, and of accessibility standards.

Although the proofs of concept I encountered were obviously early demonstrative hacks and mashups rather than “complete” products, very few of them appeared to adopt, or even require, any form of traditional laptop, desktop, or mobile device as the basis of the UI. Instead, one could almost imagine that any future UI could take on a totally different, but simplified, form: voice as the primary form of input and output, physical buttons and switches, sensors, lights, and so on.

No browser, no keyboard, no tablet, no phone.

That said, this concept wasn’t particularly new to me personally, and indeed we’re already starting to see products of this type enter the marketplace. While undertaking my PhD in the mid-2000s, I regularly encountered the argument that we were moving away from the desktop-based, direct-manipulation paradigm of human-computer interaction and towards an environment where new interaction strategies would be required as hand-held, wearable, and other mobile devices were introduced. Indeed, since the advent of the iPhone, Android, and other small-screen technologies in the years that followed, many of those original proposals have come to pass, and we have devised many new ways of interacting with technology that we simply did not have (commercially, at least) to the same extent ten years ago: touch, swipe, pinch-to-zoom, and so on.

Perhaps what I encountered is simply the next natural step towards the invisible computer that Don Norman proposed twenty years ago.

WoT Assistive Technologies?

Much of what I think about on a day-to-day basis with regard to accessibility relates to web accessibility. My colleagues and I predominantly work with websites, web applications, and native applications that are often indistinguishable from web applications (and vice versa).

Consequently, those of us who work in this sphere often think of accessibility in terms of concepts such as browser support, accessibility APIs, and web-specific issues such as page structure, keyboard operability, and so on.

But what if the “thing” has none of those? Why, for example, would a lamp have a browser? Why would a car door have a keyboard? How would one connect a screen reader to a secure entry system in an apartment block?

WoT Accessibility (guidelines)?

Of course, browsers, phones, tablets, and so on are not going away any time soon, and many WoT-networked things may eventually be controllable using a native or web-based app (or several apps) on one’s phone, so we can’t let go of the Web Content Accessibility Guidelines that easily either.

Yet, as new WoT-based devices that do not rely on traditional paradigms are introduced to the market, we may have to broaden our horizons and consider accessibility challenges posed by “things” that may not be covered by current accessibility standards, or addressable by current assistive technologies.

For example, my colleague Léonie Watson highlights that, while voice input technology has improved over the years, it can be a laborious process to get a device to understand spoken phrases, and it can also be difficult (if not impossible) for some users to understand the output. Léonie therefore proposes five tips for building accessible conversational interfaces, such as keeping the language simple, avoiding idioms and colloquialisms, and providing a comparable experience for those who may not be able to hear or to communicate with the device (for example, by providing a text transcript on screen). How can we formally evaluate such technologies in the same way we evaluate web content against established accessibility guidelines?

WoT Future?

As I’ve mentioned in previous posts on this subject, the WoT interest and working groups are primarily focused on machine-to-machine interaction. Yet, even at this level, we can start thinking about accessibility implications. My plan over the next few months is to start thinking about accessibility use cases, and about the extent to which they relate to ongoing work within the WoT interest and working groups. I would be interested to hear from anyone who has thoughts on this; feel free to leave a comment below!

Comments

Joanne Lastort says:

This reminds me a little of Skype. Skype has an app that allows the deaf to converse with a hearing person who does not know sign. The hearing person speaks, and Skype Translator converts the speech into instant text. You could pair that with Alexa and a person who’s deaf could sign to Alexa.