The Moose
Abstract: This paper presents our work to date on a haptic interface whose immediate aim is to provide access for blind computer users to graphical user interfaces. In this presentation, we describe the hardware and supporting software which together reinterpret a Microsoft Windows screen for the haptic senses. Screen objects such as windows, buttons, sliders, and pull-down menus are mapped to the workspace of a two-axis haptic interface called the Moose, where they emerge as patches and lines of varying resistance. The Moose operates much like a mouse except that it is able to move under its own power and thereby make touchable virtual objects apparent. Thus presented to the hand, interface objects may be located, identified, and even manipulated or activated. Using Microsoft Windows as a test bench, we have proven the feasibility and usefulness of the haptic interface approach for non-visual computer access. Extensions to haptic browsing of the Web are discussed.
1: Introduction
'Haptics' refers to the human tactile (cutaneous) and kinesthetic (muscle movement) senses. A haptic interface is a computer-controlled motorized device to be held in the hand by a user, which displays information to that user's haptic senses. It is an extremely powerful modality for interface design because the same device can be used for both displaying output from the computer and accepting input from the user. Moreover, using haptics significantly reduces the burden on the other senses such as vision and audition, thereby freeing these channels for other tasks. In certain instances it is, we believe, possible to completely substitute haptics for other sensory modalities. In this way, graphical information displayed on a computer's screen can be made accessible to blind persons, who at the moment are denied access to standard Graphical User Interfaces (GUIs). For example, by producing forces on the user's hand which are a function of both the user's motions and properties of the icons under the cursor, touchable representations of the screen objects can be created. In particular, we are interested in applying haptics as a sensory substitute for the graphical interfaces of today's desktop computer applications, including web browsers.
A handful of other research groups are working on powered-mouse type interfaces for GUIs. Christophe Ramstein and Vincent Hayward have designed the Pantograph, a two-axis haptic interface [Ramstein 1994], for which they are presently developing Microsoft Windows access software. They are also interested in blind user access. In Wisconsin, the TRACE Group is developing a computer access tool for the blind based on haptics [Wiker 1991]. At the University of Tokyo, a mouse with both vibrotactile and force feedback has been developed [Akamatsu, Sato 1994]. At Stanford in the 1970s, the Optacon was developed by John Linvill to allow blind persons to read printed text. The Optacon consists of a camera and an array of motorized pins, and stands as one of the first commercialized haptic display devices [Linvill 1973]. Other access devices for the blind and deaf-blind based on haptics have also been developed [Loubal 1992], [York 1989], [Frisken-Gibson 1987], [Eberhardt 1993], [Kelly, Salcudean 1994].
2: Our Prototype Interface
We have designed and built a working prototype which serves to prove the concept: the Moose. The Moose has generated quite a bit of enthusiasm from those who have seen (felt) it. It is basically a powered mouse, giving the user the ability to feel the screen objects under the mouse cursor. This mouse is used to navigate the screen like a regular mouse; but by reflecting forces (produced by its motors) back to the user, it presents a haptic representation of the symbols on the screen. In other words, as the puck of the Moose is moved, the mouse cursor moves. If the cursor should alight on an icon, a haptic representation of that icon is presented by the Moose for the user to feel and explore. The edge of a window, for example, is realized haptically as a groove. A check box is represented as a border with a repelling block at one end which becomes an attracting spring when the checkbox is "checked". Thus by moving the cursor over haptic objects the user can simultaneously identify their type and check their current status. Once a desired icon has been found, it may be selected using a typical mouse button located on the Moose puck. Thus one can explore a screen, activate 'buttons' from menus, and select other screens which in turn will be mapped and haptically displayed. For text which appears on the screen (file names, button labels, etc.), we have experimented with integrating a speech synthesizer into the interface.
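A window edge rendered as a groove can be sketched as a one-dimensional force law: a spring that pulls the puck toward the groove's centerline while the puck is inside the groove, and no force outside. The gain and width below are illustrative assumptions, not the Moose's actual parameters.

```python
def groove_force(x, center, half_width=1.0, k=0.8):
    """Restoring force (N) pulling the puck toward the groove center
    while it lies within half_width (mm) of that center; zero outside.

    A spring-toward-centerline model: one plausible way to render a
    window edge as a groove, not necessarily the Moose's exact law.
    """
    d = x - center
    if abs(d) <= half_width:
        return -k * d    # inside the groove: spring pulls toward center
    return 0.0           # outside the groove: the puck moves freely
```

Sweeping the puck across such a groove produces a brief tug toward the edge and then release, which is what makes the window border feel like a physical channel.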
As well as rendering objects belonging to the existing GUI metaphor, we are exploring the presentation of altogether new information through haptics. For example, the next appropriate user action in a given context can be indicated by causing the powered mouse to gravitate to a particular icon. Through haptics, we can truly represent the "drag and drop" metaphor by increasing the apparent mass of the Moose's puck when an object is being "dragged" and decreasing the apparent mass of the puck once the object has been "dropped". In this way, we are able to relate the mass added to the puck to the relative size of the object being dragged - the size in bytes of a file, the number of documents in a folder, etc. - information which is not so apparent in standard GUI implementations.
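One way to realize this size-to-mass mapping is to add mass proportional to the logarithm of the object's size, so large files feel noticeably heavier without saturating the motors. The logarithmic law and gain below are illustrative assumptions, not a mapping stated in the paper; the 95 gram base mass is the puck's effective mass from the hardware section.

```python
import math

def apparent_mass(base_mass_g, object_bytes, gain_g=5.0):
    """Puck's apparent mass (grams) while 'dragging' an object.

    Added mass grows with the log of the object's size in bytes, so a
    1 MB file feels heavier than a 1 kB file but a 1 GB file does not
    demand unreasonable forces. The log law and gain are assumptions.
    """
    if object_bytes <= 0:
        return base_mass_g                      # nothing being dragged
    return base_mass_g + gain_g * math.log10(object_bytes)
```

On "drop", the controller would simply revert to the base mass, giving the user an unmistakable cue that the object has been released.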
3: Hardware
Figure 1 shows the hardware components of our present planar haptic interface, the Moose. The puck or manipulandum under the user's hand is coupled to two linear voice coil motors through two perpendicularly oriented flexures. The workspace is 3 cm square while the device footprint is 33 cm square and the height is 5 cm. The effective mass in each direction is 95 grams while the maximum force output is about 6 Newtons. The workspace of the current Moose is limited by the linear motors. Future designs will utilize rotary motors and capstan drives, realizing larger workspaces yet maintaining a low-profile package.
The unique feature of our hardware design is this double flexure. On the present prototype, the double flexure is executed in two pairs of 7-cm strips of spring steel. The double flexure conveniently decouples the 2-axis motion of the puck into two single-axis motions at the linear motors. Moments and vertical forces are resisted, yet translations in the horizontal plane are transmitted directly to the motors by the manipulandum. The kinematics of this device are simple and very nearly linear, making forward and inverse kinematic calculations unnecessary. Furthermore, the workspace is flat, square like a mousepad, and free of singularities. The entire workspace is also naturally counterbalanced. Over-limit forces will cause buckling of the flexures, which we consider a safety feature. The only real disadvantage of the double flexure design is the added high-frequency structural resonances inherent in the flexures themselves. These resonances will bandlimit the display capabilities. But since they can be chosen high enough by design, they do not interfere with haptic object images.
A simple digital I/O card provides PC-bus communication to four 12-bit DACs and four quadrature counters. The voltage outputs of two DACs, ranging +/- 5 Volts, feed two transconductance amplifiers based on the LM12 power op amp and in turn the motors. A linear position encoder, 150 lines per inch, reads position on each of the motors while the counter circuit maintains an up-to-date binary representation of position. Other digital switch inputs such as buttons can be polled from software.
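The unit conversions implied by this I/O chain can be sketched as follows. The 150 lines-per-inch encoder figure is from the text; the 4x quadrature decoding factor and the puck-side scaling are assumptions, as is the linear mapping of the +/- 5 V command range onto the 12-bit DAC codes.

```python
LINES_PER_INCH = 150    # encoder resolution stated in the hardware spec
COUNTS_PER_LINE = 4     # assuming 4x quadrature decoding
MM_PER_INCH = 25.4

def counts_to_mm(counts):
    """Convert an accumulated quadrature count to displacement in mm."""
    return counts * MM_PER_INCH / (LINES_PER_INCH * COUNTS_PER_LINE)

def volts_to_dac(volts, bits=12, v_min=-5.0, v_max=5.0):
    """Map a commanded voltage in [-5, +5] V to a 12-bit DAC code."""
    volts = max(v_min, min(v_max, volts))   # clamp to the DAC range
    span = (1 << bits) - 1                  # 4095 codes for 12 bits
    return round((volts - v_min) / (v_max - v_min) * span)
```

With these assumptions, one inch of puck travel accumulates 600 counts, and the full workspace of 3 cm spans roughly 700 counts per axis.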
Finally, a speech synthesizer linked through the serial port is available for text output. Future hardware enhancements will include the following: 1) a Braille display to take the place of the speech synthesizer for text output, 2) the use of Braille cells for "shape display", and 3) the use of a small voice coil motor for vibration and texture display [Kontarinis 1995].
Figure 1: The Moose, with a hand on the puck.
4: Software
Various control routines which create haptic effects such as virtual springs, textures, and buttons have been developed and incorporated into our Windows interface. By combining these primitives we have begun to construct a library of "hapticons", each of which corresponds to a standard Windows icon. For example, our haptic checkbox has a frame surrounding the checkbox text and a detent corresponding to the checkbox state indicator. Just as the state indicator changes color when the checkbox is checked, so also our haptic checkbox state indicator changes from a repelling force field to an attracting field, a change which is immediately apparent when the profile of the checkbox is examined. Our software is divided into three distinct modules:
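The composition of primitives into a hapticon can be sketched with the checkbox described above: a spring primitive whose sign flips with the checkbox state, so that sweeping the puck across the state indicator reveals whether it attracts or repels. The gain and geometry here are illustrative assumptions, not the actual control-law values.

```python
def spring(x, center, k):
    """Linear spring primitive: positive k attracts the puck toward
    center, negative k repels it."""
    return -k * (x - center)

def checkbox_force(x, indicator_pos, checked, k=0.5):
    """Force profile of a sketch 'haptic checkbox' state indicator:
    an attracting detent when checked, a repelling field when not,
    so the state can be read by feel alone."""
    k_eff = k if checked else -k
    return spring(x, indicator_pos, k_eff)
```

A full hapticon would combine several such primitives (frame groove, label region, indicator detent) in a lookup keyed by cursor position, which is essentially what the hapticon manager's control laws and tables encode.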
The Icon Management Class
This module is responsible for "mapping" each new screen as it appears and for storing information, such as icon dimensions and icon names, about each of the icons it finds in a linked list.
The Hapticon Management Class
This module queries the icon manager, using the obtained information to construct a list of corresponding hapticons. It encapsulates the haptic properties of each hapticon in control laws and lookup tables for convenient use by the control module.
The Control Module
The control module is responsible for executing the control loop. Its action is embedded in the Windows message loop. It polls the current Moose position, Moose button status, and current window identifier. If the current window has changed, it initiates the mapping of the new window by the icon manager and requests the hapticon manager to update its hapticon list. The control module constantly passes the current Moose position to the hapticon manager and receives a force appropriate for whatever icon lies at that position. The control module then outputs that force to the Moose. Moose button clicks and moves are passed through to the Windows mouse button control routine.
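One iteration of this control loop can be sketched as follows. The device and manager objects and their method names are assumptions chosen for illustration, not the actual class interfaces of our modules.

```python
def control_step(moose, icon_mgr, hapticon_mgr, state):
    """One pass of the control loop: remap on window change, then
    convert the current puck position into an output force.
    (A sketch; object and method names are assumed, not the real API.)
    """
    window = moose.current_window()
    if window != state.get("window"):        # the screen has changed:
        icons = icon_mgr.map_window(window)  # remap the new window
        hapticon_mgr.update(icons)           # rebuild the hapticon list
        state["window"] = window
    x, y = moose.position()
    fx, fy = hapticon_mgr.force_at(x, y)     # force for icon under cursor
    moose.output_force(fx, fy)
    if moose.button_pressed():
        moose.forward_click_to_windows()     # clicks pass through to Windows
```

In the real system this body runs inside the Windows message loop rather than as a free-running thread, which keeps the haptic state synchronized with the GUI state.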
5: Specific Goals and Extensions to Web Browsing
We are currently experimenting with and developing a palette of haptic effects which will be used to explore and allow comparisons among various haptic substitutes for graphic objects: detents for buttons, solid blocks for inaccessible objects, compliant and non-compliant borders for windows, and so on. We hope that this research will result in a characterization of graphic interface objects and that a corresponding library of haptic effects for representing these objects will be created. In time we hope that a common practice for haptic interface design will arise.
The project has attracted the interest of Neil Scott and his team at Stanford's Center for the Study of Language and Information, who are currently incorporating our work into their Total Access Port (TAP) system. TAP is aimed at developing a generic adaptive interface port through which interface device signals can be intercepted and hence made available to whatever access device a disabled user finds most appropriate. The system's broad goal is to provide the individual with one personalized interface which they can bring with them to whatever computer they need to use. The haptic interface is a very realistic option for conveying the contents of a GUI to a blind person. Scott's team will replace our icon management module with their own vision recognition routines, which will obtain the icon information directly from the video signal rather than from the Windows environment. The advantage of this system is that it will be platform-independent and will thus allow blind people to use the same access device for any number of computers running any type of operating system.
Future work with the Moose includes extensions of our software to haptic browsing of the web. As Microsoft Windows icons on the screen may be queried as to their type, location, and size, so may the elements of a typical web page. Certain properties of images, forms, frames and the like may be ascertained and used to create correlates within the workspace of the Moose. In fact, the pertinent information is quite readily available within the HTML code itself. The creation of a web browser capable of haptic display simply involves re-writing those parts of a web browsing application which draw onto the screen according to the parsed HTML code. Instead, "drawing" would take place onto the Moose's workspace, using an Application Programmer's Interface (API) provided with the Moose, much like the Hapticon Management Class described above.
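As a sketch of the idea, a parser can walk the HTML and collect the elements that would receive haptic correlates, assigning each a nominal position in the Moose's 3 cm workspace. The class name, the set of tags, and the row-per-element layout are illustrative assumptions; a real haptic browser would take its geometry from the browser's own layout engine rather than from tag order.

```python
from html.parser import HTMLParser

class HapticPageMapper(HTMLParser):
    """Collect the haptically relevant elements of a page (links, form
    fields, images) so they can be laid out as patches in the Moose's
    workspace. A sketch of the idea only."""

    HAPTIC_TAGS = {"a", "input", "img", "form"}

    def __init__(self):
        super().__init__()
        self.elements = []

    def handle_starttag(self, tag, attrs):
        if tag in self.HAPTIC_TAGS:
            # Stack elements top-to-bottom in document order; each gets
            # a nominal 5 mm high row of the 30 mm workspace.
            y = 5 * len(self.elements)
            self.elements.append({"tag": tag, "y_mm": y,
                                  "attrs": dict(attrs)})
```

Each collected element would then be handed to something like the hapticon manager, which maps its tag type to a haptic effect (a detent for a link, a compliant border for a form field) at the recorded workspace position.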
6: Summary
A new motorized mouse takes over the function of the conventional mouse while also taking on the output role of the screen. With our powered mouse, a blind user can navigate and interact through an application's window. Sighted users may also realize advantages in speed and dexterity.
Using Windows internals, we have transcribed the visual information of the screen and made it available to the haptic senses. The Windows environment enables inquisitive software such as ours to access all information on the screen. Our software simply gathers that information and displays it haptically. HTML code also contains the descriptive information necessary to create a haptic rendering of a web page. The real advantage of our haptic interface over a speech screen reader is that information about the window or web page topology is presented directly and immediately rather than through time-consuming descriptive language.
Haptic icons, like graphic icons, are created entirely in software, which means that they can be modified and customized according to the needs of developers and end users. In undertaking this work, it is our hope that the basic building blocks we develop, the haptic icon library, will become the basis of further and much more comprehensive interfaces, both for blind computer users and as supplements to existing GUIs for sighted users.
References
- Akamatsu, M., Sato, S. and MacKenzie, I.S. "Multimodal mouse: A mouse-type device with tactile and force display," Presence, Vol. 3, No. 1, Winter 1994, pp. 73-80.
- Eberhardt, S.P., et al. "OMAR: a haptic display for speech perception by deaf and deaf-blind individuals," Proceedings of IEEE Virtual Reality Annual International Symposium, Seattle, WA, USA, 18-22 Sept. 1993, pp. 195-201.
- Edwards, A.D.N. "Soundtrack: an auditory interface for blind users," Human-Computer Interaction, vol. 4, no. 1, 1989, pp. 45-66.
- Frisken-Gibson, S.F., et al. "A 64-solenoid, four-level fingertip search display for the blind," IEEE Transactions on Biomedical Engineering, vol. BME-34, no. 12, Dec. 1987, pp. 963-5.
- Kelly, A.J., and Salcudean, S.E. "On the development of a force-feedback mouse and its integration into a graphical user interface," in Proceedings ASME Winter Annual Meeting, DSC-Vol. 55-1, 1994.
- Kontarinis, D.A. and Howe, R.D. "Tactile display of vibratory information in teleoperation and virtual environments," Presence, 4(4):387-402, 1995.
- Linvill, J. "Research and development of tactile facsimile reading aid for the blind: the Optacon," [Washington]: U.S. Dept. of Health, Education and Welfare, Office of Education, Bureau of Education for the Handicapped, 1973.
- Loubal, P.S. "Fingertip maptracing devices for the blind," Proceedings of Conference on Technology and Persons with Disabilities, Los Angeles, CA, USA, 18-21 March 1992, pp. 315-18.
- Minsky, M. "Computational Haptics: Texture," PhD Thesis, MIT Media Lab, 1995.
- Ramstein, C. and Hayward, V. "The Pantograph: A large workspace haptic device for a multi-modal human computer interaction," in CHI'94, Conference on Human Factors in Computing Systems, ACM/SIGCHI, Boston, MA, April 1994.
- Wiker, S.F., et al. "Development of tactile mice for blind access to computers: importance of stimulation locus, object size, and vibrotactile display resolution," Proceedings of the Human Factors Society 35th Annual Meeting, vol. 1, San Francisco, CA, USA, 2-6 Sept. 1991, pp. 708-12.
- York, B.W., et al. "Tools to support blind programmers," 17th Annual ACM Computer Science Conference, Louisville, KY, USA, 21-23 Feb. 1989, pp. 5-11.
©1997 Sile O'Modhrain. All Rights Reserved.