The art of noise (keep talking): Amuse UX

Posted on Thursday, 19 October 2017 by Léonie Watson

Amuse UX conference 2017 was held at the Hungarian Train Museum in Budapest. It was an extraordinary venue for a conference that brought together speakers and participants from around the world, to share and discuss ideas on many different aspects of UX.

I talked about designing conversations with technology: the evolution of synthetic speech, the markup languages we use to make synthetic speech sound more natural, and using the Inclusive Design Principles to create useful conversations.
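One such markup language is SSML, the W3C's Speech Synthesis Markup Language. As a small illustration (not taken from the talk itself), a fragment like this tells a speech engine where to pause, slow down, and add emphasis:

```xml
<speak version="1.1" xmlns="http://www.w3.org/2001/10/synthesis">
  <p>
    Hello <break time="300ms"/> and welcome.
    <prosody rate="slow">This sentence is spoken more slowly.</prosody>
    The word <emphasis level="strong">really</emphasis> matters here.
  </p>
</speak>
```

Cues like these are part of what makes a synthetic voice feel less robotic and more conversational.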

The slides from my talk are available.

I received some interesting questions:

Will we ever speak to technology by default?
I think so, yes. I think it’s already happening. When we first began talking to Siri or Cortana, it felt awkward, especially in public. Now it’s becoming more commonplace, and so we’re getting used to doing it more and more.
Do you think AI driven synthetic speech will develop its own accent or language?
It’s possible, I suppose. There was a story recently where everyone thought a Facebook chatbot had done just that, but it turned out not to be the case. Language and accents develop through conversation, though. So providing humans are always part of that conversation, we’ll still be an influencing factor on the way AI systems learn and adapt. It could work in reverse too, where we adapt our language based on our conversations with technology.

If you’re interested in the evolution of synthetic speech, I learned a lot from The History and State of Speech by Brian Kardell.

About Léonie Watson

Léonie (@LeonieWatson) was Director of Developer Communications at TPG (2013-18). She is co-chair of the W3C Web Platform Working Group (working on HTML and Web Components), a writer for Smashing Magazine and Net Magazine, and a regular conference speaker.

