Ubiquitous User Interfaces (UUI)

Speaker:  Aaron Quigley – St Andrews, Scotland, UK
Topic(s):  Human-Computer Interaction

Abstract

UbiComp, or Ubiquitous Computing, is a model of computing in which computation is everywhere and computer functions are integrated into everything. It can be built into the basic objects, environments, and activities of our everyday lives in such a way that no one will notice its presence (Weiser, 1999). Such a model of computation will “weave itself into the fabric of our lives, until it is indistinguishable from it” (Weiser, 1999). Indeed, everyday objects will become sites for sensing, input, and processing, along with user output (Greenfield, 2006). Within such a model of computing, we need to remind ourselves that it is the user interface which represents the point of contact between a computer system and a human, both in terms of input to the system and output from the system.

There are many facets of UbiComp, from low-level sensor technologies in the environment, through the collection, management, and processing of context data, to the middleware required to enable the envisaged dynamic composition of devices and services. These hardware, software, systems, and services act as the computational edifice around which we need to build our Ubiquitous User Interface (UUI). The ability to provide natural inputs and outputs from a system, allowing it to remain in the periphery, is hence the central challenge in UUI design and the focus of this talk.

Today, displays and devices are all around us: on and around our bodies, fixed and mobile, bleeding into the very fabric of our day-to-day lives. Displays come in many forms, such as smart watches, head-mounted displays, and tablets, and range from fixed and mobile to ambient and public. However, we know more about the displays connected to our devices than they know about us. Displays, and the devices they are connected to, are largely ignorant of the context in which they sit, including the physiological, environmental, and computational state around them. They do not know about the physiological differences between people, the environments they are being used in, or whether they are being used by one person or many.

This talk considers our display environments as an ecosystem and asks how we might model, measure, predict, and adapt how people use and create UUIs in a myriad of settings. With modelling, we seek to represent the physiological differences between people and use these models to adapt and personalise designs and user interfaces. With measurement and prediction, we seek to employ various computer vision and depth-sensing techniques to better understand how displays are used. And with adaptation, we aim to explore subtle techniques to support the diverging input and output fidelities of display devices. Our UbiComp user interface is complex and constantly changing, affording us an ever-changing computational and contextual edifice. As part of this, the display elements need to be understood as an adaptive display ecosystem, rather than simply as pixels, if we are to realise a Ubiquitous User Interface.
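The model–measure–adapt loop described above can be sketched in code. The following is a minimal, hypothetical illustration (all names here, such as `DisplayContext` and `adapt_output`, are invented for this sketch and do not correspond to any real system from the talk): a display senses a little about its context and a policy adapts the presentation accordingly.

```python
from dataclasses import dataclass

@dataclass
class DisplayContext:
    """Hypothetical context a display in the ecosystem might sense."""
    kind: str                 # e.g. "watch", "tablet", "public"
    viewing_distance_m: float # estimated distance to the viewer
    shared: bool              # one viewer, or many?

def adapt_output(ctx: DisplayContext) -> dict:
    """Toy adaptation policy: pick presentation parameters from context.

    Larger text for distant viewers, reduced detail on small displays,
    and no personal content on shared or public displays.
    """
    scale = max(1.0, ctx.viewing_distance_m / 0.5)  # grow text with distance
    detail = "low" if ctx.kind == "watch" else "high"
    private = not (ctx.shared or ctx.kind == "public")
    return {"text_scale": round(scale, 1), "detail": detail, "show_private": private}

print(adapt_output(DisplayContext("watch", 0.3, False)))
print(adapt_output(DisplayContext("public", 3.0, True)))
```

A real system would of course derive the context from sensing (e.g. depth cameras estimating viewer distance and count) rather than from hand-supplied values; the sketch only shows the shape of the adaptation step.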

About this Lecture

Number of Slides:  80
Duration:  55 minutes
Languages Available:  English
Last Updated: 

Request this Lecture

To request this particular lecture, please complete this online form.

Request a Tour

To request a tour with this speaker, please complete this online form.

All requests will be sent to ACM headquarters for review.