Universal Design for Interactive Systems
Universal design is the process of designing products so that they can be used by as many people as possible in as many situations as possible. This can be achieved by designing systems either to have built-in redundancy or to be compatible with assistive technologies.
Universal design principles
In the late 1990s a group at North Carolina State University in the USA proposed seven general principles of universal design. These were intended to cover all areas of design and are equally applicable to the design of interactive systems. These principles give us a framework in which to develop universal designs.
Principle one is equitable use: the design is useful to people with a range of abilities and appealing to all. No user is excluded or stigmatized. Wherever possible, access should be the same for all; where identical use is not possible, equivalent use should be supported. Where appropriate, security, privacy and safety provision should be available to all.
Principle two is flexibility in use: the design allows for a range of ability and preference, through choice of methods of use and adaptivity to the user’s pace, precision and custom.
Principle three is that the system be simple and intuitive to use, regardless of the knowledge, experience, language or level of concentration of the user. The design needs to support the user’s expectations and accommodate different language and literacy skills. It should not be unnecessarily complex and should be organized to facilitate access to the most important areas. It should provide prompting and feedback as far as possible.
Principle four is perceptible information: the design should provide effective communication of information regardless of the environmental conditions or the user’s abilities. Redundancy of presentation is important: information should be represented in different forms or modes (e.g. graphic, verbal, text, touch). Essential information should be emphasized and differentiated clearly from the peripheral content. Presentation should support the range of devices and techniques used to access information by people with different sensory abilities.
Principle five is tolerance for error: minimizing the impact and damage caused by mistakes or unintended behavior. Potentially dangerous situations should be removed or made hard to reach. Potential hazards should be shielded by warnings. Systems should fail safe from the user's perspective and users should be supported in tasks that require concentration.
Principle six is low physical effort: systems should be designed to be comfortable to use, minimizing physical effort and fatigue. The physical design of the system should allow the user to maintain a natural posture with reasonable operating effort. Repetitive or sustained actions should be avoided.
Principle seven requires size and space for approach and use: the placement of the system should be such that it can be reached and used by any user regardless of body size, posture or mobility. Important elements should be on the line of sight for both seated and standing users. All physical components should be comfortably reachable by seated or standing users. Systems should allow for variation in hand size and provide enough room for assistive devices to be used.
These seven principles give us a good starting point in considering universal design.
Multi-modal interaction
Multi-modal interaction provides the user with multiple modes of interacting with a system. Multi-modal systems are those that process two or more combined user input modes, such as speech, touch, gesture and gaze, in a coordinated manner with multimedia system output.
Sound in the interface
Sound is an important contributor to usability. There is experimental evidence to suggest that the addition of audio confirmation of modes, in the form of changes in key clicks, reduces errors. Sound can convey transient information and does not take up screen space, making it potentially useful for mobile applications.
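As a small illustration of this finding, the sketch below (a minimal Python sketch, here and in the later examples) shows mode-dependent key clicks: each input mode produces an audibly distinct click, so a mode error is heard immediately. The play_tone routine is a hypothetical stand-in for a real audio call, and the frequencies and durations are illustrative, not taken from the experiments.

# A minimal sketch of audio confirmation of modes. play_tone is a
# hypothetical placeholder for a real audio call.

def play_tone(frequency_hz: int, duration_ms: int) -> None:
    # Placeholder: a real system would sound a short click here.
    print(f"click at {frequency_hz} Hz for {duration_ms} ms")

# Each input mode gets an audibly distinct key click.
MODE_CLICKS = {
    "insert": (880, 30),   # high, short click while inserting text
    "command": (440, 30),  # lower click while in command mode
}

def on_keypress(mode: str) -> None:
    frequency_hz, duration_ms = MODE_CLICKS[mode]
    play_tone(frequency_hz, duration_ms)

on_keypress("insert")   # click at 880 Hz for 30 ms
on_keypress("command")  # click at 440 Hz for 30 ms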
Speech in the interface
Language is rich and complex. We learn speech naturally as children ‘by example’ –by listening to and mimicking the speech of those around us. This process seems so effortless that we often do not appreciate its complex structures, and it is not until we attempt to learn a new language later in life, or to make explicit the rules of the one we speak, that the difficulties inherent in language understanding become apparent. This complexity makes speech recognition and synthesis by computer very difficult.
Speech recognition: There have been many attempts at developing speech recognition systems, but, although commercial systems are now commonly and cheaply available, their success is still limited to single-user systems that require considerable training.
Speech recognition offers another mode of communication that may be used to supplement existing channels or be the primary one. When a user’s hands are already occupied, such as in a factory, speech may prove to be the ideal input medium. Speech input does not require the use of a cumbersome keyboard and so in lightweight mobile situations there is a potential role for such systems. It also provides an alternative means of input for users with visual, physical and cognitive impairment as we will see later. Single-user, limited vocabulary systems can work satisfactorily, but the current success rate of recognition for general users and unconstrained language is still low.
Speech synthesis: Complementary to speech recognition is speech synthesis. The notion of being able to converse naturally with a computer is an appealing one for many users, especially those who do not regard themselves as computer literate, since it reflects their natural, daily medium of expression and communication. However, there are as many problems in speech synthesis as there are in recognition. The most difficult problem is that we are highly sensitive to variation and intonation in speech, and are therefore intolerant of imperfections in synthesized speech. We are so used to hearing natural speech that we find it difficult to adjust to the monotonic, non-prosodic tones that synthesized speech can produce.
Uninterpreted speech: Speech does not have to be recognized by a computer to be useful in the interface. Fixed prerecorded messages can be used to supplement or replace visual information. Recordings have natural human prosody and pronunciation, although quality is sometimes low. Segments of speech can be used together to construct messages, for example the announcements in many airports and railway stations.
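A minimal sketch of this kind of message construction follows, assuming a hypothetical table of prerecorded segments; a real system would queue and play audio clips rather than join their transcripts.

# A minimal sketch of building announcements from prerecorded
# segments. The segment names are hypothetical; in a real system
# each id would index a recorded audio clip to be played in order.

SEGMENTS = {
    "arriving": "The train now arriving at platform",
    "four": "four",
    "to_york": "is the delayed service to York.",
}

def announce(segment_ids):
    # Join the transcripts to show the composition; a deployed
    # system would play the corresponding recordings in sequence.
    return " ".join(SEGMENTS[s] for s in segment_ids)

print(announce(["arriving", "four", "to_york"]))
# The train now arriving at platform four is the delayed service to York.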
Non-speech sound
Non-speech sounds can offer a number of advantages. As speech is serial, we have to listen to most of a sentence before we understand what is being said. Non-speech sounds can often be assimilated much more quickly. Speech is language dependent: a speech-based system requires translation before it can be used by another language group, whereas the meaning of non-speech sounds can be learned regardless of language. Speech requires the user's attention; non-speech sound can make use of the phenomenon of auditory adaptation, whereby background sounds are ignored unless they change or cease. However, a disadvantage is that non-speech sounds have to be learned, whereas the meaning of a spoken message is obvious (at least to a user who is familiar with the language used).
Earcons
An alternative to using natural sounds is to devise synthetic sounds. Earcons use structured combinations of notes, called motives, to represent actions and objects. These vary according to rhythm, pitch, timbre, scale and volume. There are two types of combination of earcon. Compound earcons combine different motives to build up a specific action, for example combining the motives for 'create' and 'file'. Family earcons group earcons of similar types: for example, operating system errors and syntax errors would both belong to the 'error' family. In this way, earcons can be hierarchically structured to represent menus.
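This structure is easy to sketch in code. The motives below are illustrative inventions (note names standing in for pitches); a real system would render them through a synthesizer or MIDI.

# A minimal sketch of compound and family earcons. Each motive is
# an illustrative short sequence of (note, duration) pairs.

MOTIVES = {
    "create": [("C4", 0.2), ("E4", 0.2)],
    "destroy": [("E4", 0.2), ("C4", 0.2)],
    "file": [("G4", 0.4)],
}

def compound_earcon(*actions):
    # Compound earcons concatenate motives, e.g. 'create' + 'file'.
    notes = []
    for action in actions:
        notes.extend(MOTIVES[action])
    return notes

# Family earcons share a base motive: every member of the 'error'
# family opens the same way, then differs in its final notes.
ERROR_BASE = [("C3", 0.3), ("C3", 0.3)]
ERROR_FAMILY = {
    "os_error": ERROR_BASE + [("G3", 0.3)],
    "syntax_error": ERROR_BASE + [("E3", 0.3)],
}

print(compound_earcon("create", "file"))
print(ERROR_FAMILY["syntax_error"])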
Touch in the interface
Touch is the only sense that can be used both to send and to receive information. Although it is not yet widely used in interacting with computers, there is a significant research effort in this area and commercial applications are becoming available.
The use of touch in the interface is known as haptic interaction. Haptics is a generic term relating to touch, but it can be roughly divided into two areas: cutaneous perception, which is concerned with tactile sensations through the skin; and kinesthetics, which is the perception of movement and position. Both are useful in interaction but they require different technologies.
Handwriting recognition
The idea of being able to interpret handwritten input is very appealing, and handwriting appears to offer both textual and graphical input using the same tools. However, there are problems associated with the use of handwriting as an input medium.
Recognizing handwriting
The variation between the handwriting of individuals is large; moreover, the handwriting of a single person varies from day to day and evolves over the years.
Reliable recognition of general cursive script is so difficult to achieve that no systems in use today do it well. However, when letters are written individually, with a small separation, the success of systems becomes more respectable, although they have to be trained to recognize the characteristics of different users. If tested on a user they have not been trained for, success is limited again.
Gesture recognition
Gesture is a component of human–computer interaction that has become the subject of attention in multi-modal systems. Being able to control the computer with certain movements of the hand would be advantageous in many situations where there is no possibility of typing, or when other senses are fully occupied. It could also support communication for people who have hearing loss, if signing could be ‘translated’ into speech or vice versa. But, like speech, gesture is user dependent, subject to variation and co-articulation. The technology for capturing gestures is expensive.
Designing for diversity
Designing for users with disabilities
Employers and manufacturers of computing equipment have not only a moral responsibility to provide accessible products, but often also a legal one. In many countries, legislation now demands that the workplace must be designed to be accessible, or at least adaptable, to all.
Visual impairment
The sensory impairment that has attracted the most attention from researchers, perhaps because it is potentially one of the most debilitating as far as interaction is concerned, is visual impairment. The rise of graphical interfaces has reduced the possibilities for visually impaired users. In text-based interaction, screen readers using synthesized speech or braille output devices provided complete access to computers: input relied on touch-typing, with these mechanisms providing the output. However, today the standard interface is graphical.
There are two key approaches to extending access: the use of sound and the use of touch. A number of systems use sound to provide access to graphical interfaces for people with visual impairment.
Hearing impairment
Compared with a visual disability where the impact on interacting with a graphical interface is immediately obvious, a hearing impairment may appear to have little impact on the use of an interface. After all, it is the visual not the auditory channel that is predominantly used. To an extent this is true, and computer technology can actually enhance communication opportunities for people with hearing loss. Email and instant messaging are great levellers and can be used equally by hearing and deaf users alike.
Gesture recognition has also been proposed to enable translation of signing to speech or text, again to improve communication particularly with non-signers.
Physical impairment
Users with physical disabilities vary in the amount of control and movement that they have over their hands, but many find the precision required in mouse control difficult.
Speech input and output are options for those without speech difficulties.
An alternative is the eyegaze system, which tracks eye movements to control the cursor, or a keyboard driver that can be attached to the user’s head. If the user is unable to control head movement, gesture and movement tracking can be used to allow the user control.
Speech impairment
For users with speech and hearing impairments, multimedia systems provide a number of tools for communication, including synthetic speech and text-based communication and conferencing systems. Textual communication is slow, which can lower the effectiveness of the communication. Predictive algorithms have been used to anticipate the words being typed and fill them in, reducing the amount of typing required. Conventions can help to provide context, which is lost from face-to-face communication, for example the 'smilie' :-) to indicate a joke. Facilities that allow turn-taking protocols to be established also help natural communication. Speech synthesis also needs to be rapid to reflect a natural conversational pace, so responses can be pre-programmed and selected using a single switch.
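The sketch below shows the idea behind such prediction in its simplest form: rank words from the user's own message history by frequency and offer completions for the current prefix. The toy history is invented for illustration; a real communication aid would also use context, such as the preceding words, to rank candidates.

# A minimal sketch of prefix-based word prediction. The history is
# a toy stand-in for the user's past messages.

from collections import Counter

history = "see you at the station see you soon see you at home".split()
frequencies = Counter(history)

def predict(prefix: str, n: int = 3):
    # Return the n most frequent known words starting with prefix.
    candidates = [w for w in frequencies if w.startswith(prefix)]
    return sorted(candidates, key=lambda w: -frequencies[w])[:n]

print(predict("s"))  # ['see', 'station', 'soon']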
Designing for different age groups
Older people
The requirements of the older population may differ significantly from those of other population groups, and will vary considerably within the group. The proportion of disabilities increases with age: more than half of people over 65 have some kind of disability. Just as for younger people with disabilities, technology can provide support for failing vision, hearing, speech and mobility. New communication tools, such as email and instant messaging, can provide social interaction in cases where lack of mobility or speech difficulties reduce face-to-face possibilities. Mobile technologies can be used to provide memory aids where there is age-related memory loss.
Some older users, while not averse to using technology, may lack familiarity with it and be apprehensive about learning. They may find the terminology used in manuals and training books alien and difficult to follow. Their interests and concerns may also differ from those of younger users.
In spite of the potential benefits of interactive technology to older people, very little attention has been paid to this area until recently. Researchers are now beginning to address issues such as how technology can best support older people, what the key design issues are, and how older people can be effectively included in the design process, and this area is likely to grow in importance in the future.
Children
Children have distinct needs when it comes to technology and, again, as a population they are diverse. The requirements of a three-year-old will be quite different from those of a 12-year-old, as will be the methods that can be used to uncover them. Children are, however, different from adults, and have their own goals, likes and dislikes. It is therefore important to involve them in the design of interactive systems that are for their use, though this in itself can be challenging, as they may not share the designer's vocabulary or be able to verbalize what they think. Design approaches have therefore been developed specifically to include children actively as members of the design team.
In one such approach, children are included in an intergenerational design team that focuses on understanding and analyzing context. Team members, including the children, use a range of sketching and note-taking techniques to record their observations. Paper prototyping, using art tools familiar to children, enables both adults and children to participate in building and refining prototype designs on an equal footing. The approach has been used effectively to develop a range of new technologies for children.
Younger children may have difficulty using a keyboard, and may not have well-developed hand–eye coordination. Pen-based interfaces can be a useful alternative input device.
Interfaces that allow multiple modes of input, including touch or handwriting, may be easier for children than keyboard and mouse. Redundant displays, where information is presented through text, graphics and sound, will also enhance their experience.
Designing for cultural difference
Cultural difference is often treated as synonymous with national difference, but this is too simplistic. While there are clearly important national cultural differences, other factors, such as age, gender, race, sexuality, class, religion and political persuasion, may all influence an individual's response to a system. This is particularly the case with websites, where often the explicit intention is to design for a particular culture or subculture.
Clearly, while all of these contribute to a person’s cultural identity, they will not all always be relevant in the design of a given system. However, we can draw out some key factors that we need to consider carefully if we are to practice universal design. These include language, cultural symbols, gestures and use of color.
Symbols have different meanings in different cultures. The rainbow, for example, is a symbol of covenant with God in Judeo-Christian religions, of diversity in the gay community, and of hope and peace in the cooperative movement. We cannot assume that everyone will interpret symbols in the same way, and should ensure that alternative meanings of symbols will not create problems or confusion for the user. The study of the meaning of symbols is known as semiotics and is a worthwhile diversion for the student of universal design.
Colors are often used in interfaces to reflect 'universal' conventions, such as red for danger and green for go. But how universal are these conventions? In fact, red and green mean many different things in different countries. As well as danger, red represents life (India), happiness (China) and royalty (France). Green is a symbol of fertility (Egypt) and youth (China) as well as safety (Anglo-American). It is difficult to assume any universal interpretation of color, but the intended significance of particular colors can be supported and clarified through redundancy: providing the same information in another form as well.
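As a small illustration of such redundancy, the sketch below conveys a status through color, a symbol and a text label at once, so no single convention has to carry the meaning alone. The style table is invented for illustration.

# A minimal sketch of redundant status coding: color, symbol and
# label all carry the same information, so users for whom the color
# convention is unfamiliar can still read the status correctly.

STATUS_STYLES = {
    "danger": {"color": "red", "symbol": "!", "label": "Danger"},
    "go": {"color": "green", "symbol": ">", "label": "Proceed"},
}

def render_status(status: str) -> str:
    style = STATUS_STYLES[status]
    # A real UI would set the widget's color; here all three
    # channels are printed to show the redundancy.
    return f"[{style['color']}] {style['symbol']} {style['label']}"

print(render_status("danger"))  # [red] ! Danger
print(render_status("go"))      # [green] > Proceed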