Dictation systems, read-aloud software for the blind, speech control of machinery, geographical information systems with speech input and output, and educational software with 'talking head' artificial tutorial agents are already on the market. The field is expanding rapidly, and new methods and applications emerge almost daily. But systematic sources of information have not kept pace with the needs of those developing and evaluating these systems. Much of the relevant information remains widely scattered across speech and acoustic engineering, linguistics, phonetics, and experimental psychology.
The Handbook of Multimodal and Spoken Dialogue Systems presents current and developing best practice in resource creation for speech input/output software and hardware. This volume brings together experts in these fields to give detailed 'how to' information and recommendations on planning spoken dialogue systems, designing and evaluating audiovisual and multimodal systems, and evaluating consumer off-the-shelf products.
In addition to standard terminology in the field, the following topics are covered in depth: How to collect high-quality data for designing, training, and evaluating multimodal and spoken dialogue systems;
How to evaluate real-life computer systems with speech input and output;
How to describe and model human-computer dialogue precisely and in depth.
Also included: The first systematic medium-scale compendium of terminology with definitions.
This handbook has been designed especially for the needs of development engineers, decision-makers, researchers, and advanced-level students in the fields of speech technology, multimodal interfaces, multimedia, computational linguistics, and phonetics.