My research interests focus primarily on information processing issues in Human-Computer Interaction. Most of this work has concentrated on the enhancement and integration of existing techniques to create novel technologies.
Most of my research can be classified under the following headings:
Emerging technologies mean that items in media such as text, speech, or images can be automatically indexed, albeit often with some degree of error. Once indexed, documents can be retrieved using various information retrieval strategies. However, retrieval often represents only part of the problem of accessing the information actually held in data items. For example, playing spoken documents in their entirety is inefficient and time-consuming, or the information may be in a language with which the user is unfamiliar. I am interested in how information can be made available to users to improve the overall efficiency of information access.

Information Retrieval
My work in Information Retrieval has focused on a number of topics relevant to the overall theme of Information Access.
Growing archives of spoken material are creating a significant problem in data access: if you do not know what an archive contains, how do you search for a document within it? The Video Mail Retrieval Using Voice (VMR) project, conducted at the University of Cambridge, sought to address this problem by indexing the spoken material using speech recognition and then applying probabilistic information retrieval techniques, developed from experience with text, to retrieve documents of interest.
Various approaches to indexing were investigated including word spotting based on sub-word phone lattices and standard large vocabulary speech recognition. The project successfully demonstrated a system for the retrieval of personal video/voice mail and evaluated its retrieval performance in office conditions to be around 80% of that achieved using text transcriptions.
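The retrieval step can be illustrated with a simple term-weighting sketch. This is not the VMR project's actual probabilistic weighting scheme; it is a basic tf-idf scoring function over already-indexed documents, and the collection and document names in the usage example are invented for illustration.

```python
import math
from collections import Counter

def tfidf_scores(query_terms, documents):
    """Score documents against a query with a simple tf-idf weighting.

    `documents` maps a document id to its list of index terms (which,
    for spoken documents, would come from a speech recogniser and so
    may contain recognition errors).
    """
    n_docs = len(documents)
    # Document frequency: how many documents contain each term.
    df = Counter()
    for terms in documents.values():
        for term in set(terms):
            df[term] += 1
    scores = {}
    for doc_id, terms in documents.items():
        tf = Counter(terms)
        score = 0.0
        for term in query_terms:
            if term in tf:
                # tf * idf: terms frequent in this document but rare
                # in the collection contribute the most weight.
                score += tf[term] * math.log(n_docs / df[term])
        scores[doc_id] = score
    return scores
```

A query such as `tfidf_scores(["budget", "report"], docs)` then ranks documents by score, with errorful acoustic index terms simply behaving as noisy document terms.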
We also built a demonstration system for the online retrieval of broadcast television news programmes. This uses content indexing from subtitles, and demonstrates retrieval and real-time playback from an online archive of around 20 hours of digitally stored video material.
More recently, in collaboration with City University, London and Cambridge University, I participated in the Spoken Document Retrieval (SDR) track of the 6th US NIST Text Retrieval Conference (TREC 6).
My current work in the area of spoken document retrieval is focused on extending the techniques developed in the Video Mail Retrieval Using Voice (VMR) project. In particular, I am interested in the application of relevance feedback techniques, improved acoustic indexing methods, and greater integration of the acoustic indexing and information retrieval methods than has so far been attempted.
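The idea of relevance feedback can be sketched with the classic Rocchio reformulation, in which the query vector is moved towards documents the user judged relevant and away from those judged non-relevant. This is a generic textbook sketch, not necessarily the feedback method used in the VMR work; the weights and vectors below are illustrative.

```python
def rocchio(query_vec, relevant_vecs, nonrelevant_vecs,
            alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio query reformulation over sparse term-weight dictionaries.

    Moves the query towards the centroid of relevant documents and
    away from the centroid of non-relevant ones.
    """
    terms = set(query_vec)
    for vec in relevant_vecs + nonrelevant_vecs:
        terms.update(vec)
    new_query = {}
    for term in terms:
        weight = alpha * query_vec.get(term, 0.0)
        if relevant_vecs:
            rel = sum(v.get(term, 0.0) for v in relevant_vecs)
            weight += beta * rel / len(relevant_vecs)
        if nonrelevant_vecs:
            nonrel = sum(v.get(term, 0.0) for v in nonrelevant_vecs)
            weight -= gamma * nonrel / len(nonrelevant_vecs)
        # Negative weights are conventionally clipped to zero.
        if weight > 0:
            new_query[term] = weight
    return new_query
```

Terms from relevant documents (e.g. "forecast" alongside a "budget" query) enter the reformulated query automatically, which is one reason feedback can help when acoustic indexing misses some query terms.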
Information retrieval techniques for the English language have been researched extensively in recent years. However, much less work has been carried out on Asian language retrieval. My work on Japanese language retrieval, carried out while working as a Visiting Fellow at the Toshiba Corporation Research and Development Center in Japan, focused initially on examining the application of retrieval techniques developed for English to Japanese language documents and search requests. This work was then extended to explore the application of relevance feedback techniques to Japanese language retrieval. Results from all these experiments were very positive, indicating that information retrieval techniques, in particular term weighting, can be applied successfully to different languages, although for best performance some specialised tuning to the individual language is generally required. For example, Japanese text has no spaces between words, and a technique must be used to segment it into appropriate indexing units, e.g. words or character n-grams.
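Character n-gram segmentation, one of the indexing options mentioned above, is straightforward to sketch. The choice of bigrams in the example is illustrative (bigrams are a common choice for Japanese, but the work above does not fix a particular n).

```python
def char_ngrams(text, n=2):
    """Segment unspaced text into overlapping character n-grams.

    For Japanese, which has no spaces between words, the resulting
    n-grams can serve directly as indexing units, avoiding the need
    for a word segmenter.
    """
    return [text[i:i + n] for i in range(len(text) - n + 1)]
```

For example, `char_ngrams("情報検索")` yields `["情報", "報検", "検索"]`, so a document and a query sharing the substring for "retrieval" will share index terms even though neither was segmented into words.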
The development of online information repositories, due principally to the current expansion of the World Wide Web, is creating many opportunities, and also problems, in information retrieval. Online documents are available internationally in many different languages. This enables users to access directly hitherto unimagined sources of information. However, in conventional information retrieval systems the user must enter a search query in the language of the documents in order to retrieve them. This presupposes that the user is able to write in the language of all possible relevant documents, a restriction that clearly limits the amount of information to which an individual user will have access. Cross-language information retrieval enables a user to enter a query in a language in which they are fluent, and uses language translation methods to retrieve documents originally written in other languages.
My research in this area has examined two subject areas. First, I worked on one of the most challenging query translation tasks: translating English language queries to retrieve Japanese language texts. This work compared a number of query translation techniques based on machine translation and on looking up word translations in an online English-Japanese dictionary. Second, I performed experiments in cross-language retrieval of spoken documents, using French language queries to retrieve English language documents.
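The dictionary-lookup approach to query translation can be sketched as follows. The bilingual dictionary contents in the usage example are invented for illustration, and real dictionary-based systems add disambiguation and weighting steps that are omitted here.

```python
def translate_query(query_terms, bilingual_dict):
    """Dictionary-based query translation.

    Each source-language term is replaced by every translation listed
    for it, so an ambiguous term contributes several target-language
    terms to the translated query.
    """
    translated = []
    for term in query_terms:
        # Terms absent from the dictionary (e.g. proper nouns or
        # acronyms) are carried over untranslated.
        translated.extend(bilingual_dict.get(term, [term]))
    return translated
```

Translation ambiguity is the central difficulty: every spurious translation added to the query acts as noise at retrieval time, which is one reason this comparison against full machine translation is interesting.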
Cross-Language Information Access is an extension of the Cross-Language Information Retrieval paradigm. Users unfamiliar with the language of documents returned using Cross-Language Information Retrieval are often unable to extract relevant information from these documents. The objective of Cross-Language Information Access is to introduce additional post-retrieval processing to enable users to make use of these retrieved documents. This additional processing may take the form of applying techniques such as machine translation, summarisation or information extraction.
My work in speech recognition has concentrated on the investigation of language models for speech recognition and understanding as described in the following section.
I am also interested in the development of suitable speech recognition techniques for content indexing for Spoken Document Retrieval. This has involved work in large vocabulary recognition, keyword spotting, and indexing using a system for phone lattice scanning.
My most recent work in this area has focused on retrieval tasks with high out-of-vocabulary rates and very short documents and queries. For these tasks, recognition of individual index terms is vital, and the high out-of-vocabulary levels mean that large vocabulary recognition may not be appropriate.

Natural Language Modelling
My work on natural language modelling has been focused on spoken language applications. For spoken language systems the language model is often chosen to fit the task. Thus for simple systems (e.g. keyword spotters) a parallel network of words in the chosen recognition vocabulary may be used, whereas for large vocabulary speech recognition (e.g. transcription systems) a more complex language model is required.
Conventional speech recognition systems have relied on a Markov model trained on a large corpus of text, frequently referred to as an n-gram model. These language models take no account of linguistic structure, but are found to be effective for indexing and transcription applications. However, they are not generally suitable for speech understanding systems, which require that some linguistic interpretation is made of the speech signal.
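The corpus-trained Markov model described above can be sketched with a minimal maximum-likelihood bigram estimator. Real recognisers smooth these counts and use higher-order histories; the two-sentence training corpus in the test is purely illustrative.

```python
from collections import Counter

def bigram_model(sentences):
    """Estimate a maximum-likelihood bigram language model.

    `sentences` is a list of token lists. Returns a function giving
    P(word | previous word), estimated purely from relative counts;
    note that structure plays no role - only adjacency statistics.
    """
    bigrams = Counter()
    history = Counter()
    for sentence in sentences:
        tokens = ["<s>"] + sentence  # sentence-start marker
        for prev, word in zip(tokens, tokens[1:]):
            bigrams[(prev, word)] += 1
            history[prev] += 1

    def prob(word, prev):
        if history[prev] == 0:
            return 0.0
        return bigrams[(prev, word)] / history[prev]

    return prob
```

The total absence of linguistic structure is visible directly: the model knows only which words follow which, which is exactly why such models suit transcription but not understanding.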
My research in this area has focused on a comparison between traditional n-gram language models and linguistically motivated language models based on probabilistic context-free grammars. This work has investigated the relative merits of the two approaches and proposed a new consolidated language model which combines the rule-based and statistical approaches. Much of this work has centred on experimental investigation into the practical integration of rule-based language models in speech recognition systems.
I am also interested in statistical computer-assisted grammar construction from annotated corpora.

Information Visualisation
The vast amounts of information available from online sources mean that users are often unable to quickly access material that is of interest to them. Even within individual text documents it can take considerable time to locate the relevant sections. Information visualisation is aimed at improving the efficiency of information access, both within individual documents and between retrieved items. This is achieved using methods such as graphical content representation and content summarisation.
Retrieving multimedia documents such as spoken data is only part of the solution. The temporal nature of spoken material means that it is often extremely slow to access the document contents in order to assess their relevance and extract the information needed to satisfy the user's information need. An efficient method of information access for audio documents is the graphical audio browser developed within the Video Mail Retrieval project. The contents of a document are displayed graphically, and the user is able to begin playback at any point in the document by pointing and clicking. In retrieval applications, the graphical display shows the results of search query matching on the document, enabling the user to locate potentially relevant sections very quickly.

Mobile Computing
Developments in wireless mobile computing devices, e.g. PDAs or WAP-enabled mobile phones, enable users to access information remotely from any location. The focus of this research is to enhance information provision by extending information retrieval methods to context-aware environments. This will enable users to be alerted to information that may be important to them, or provide them with content relevant to their current context, e.g. their current location.
Conventional retrieval algorithms are not context-aware. The focus of this research is therefore the exploration of the relationship between conventional information retrieval and information filtering, and context-aware environments.
A virtual world immerses the observer in a physically remote environment. Rapid technological developments are introducing many new possibilities for interaction in virtual worlds. My current interests in this area are: medical training tools, and navigation through and interaction with virtual environments.

Virtual Worlds for Surgical Simulation and Training
The Exeter Virtual Worlds Project is developing applications for medical training. The main project is the Exeter Virtual Worlds (EVW) system for shoulder arthroscopy training. The EVW system uses real-time playback of photographic images from exhaustive exploration of joints with an arthroscope to enable trainee surgeons to develop examination skills from real images before examining real patients.
Current work on the EVW project is focused on developing prototype systems for trainee surgeons, and obtaining feedback from them to enable the system to be developed further.

Interaction in Virtual Worlds
Traditional advertising is limited by constraints of space and natural physical constraints. Advertising in virtual environments, whether in web pages or immersive virtual environments, has no such restrictions. My work in this area is exploring new concepts in advertising, such as user personalisation and animation of advertisements.
Affective Computing is a newly emerging field that has been defined as "computing that relates to, arises from, or deliberately influences emotions." There are many challenging open research problems in this area. Current research is only just beginning to tackle the most basic issues related to emotion detection and synthesis.
But what have emotions to do with computers?
Well, recent neurological evidence suggests that emotions are an essential component of human reasoning, even in apparently completely rational decision making. Furthermore, emotional expression is a natural and significant part of human interaction. Traditional human-computer interaction studies take no account of any emotional element in the interaction process. Your computer doesn't know if you are frustrated, interested, tired, rushed, or bored. If computer interfaces are to be made truly intelligent, it can be argued that they must include emotional processing. Your computer would then be able to respond to you in a more pleasing manner by learning about you and your emotional states.

Affective Communication in Virtual Environments
Interaction using electronic communications such as email and chat environments is becoming increasingly popular. The information passed between participants in these text-based environments is limited to the factual material entered by the user. Absent from these interactions is any sense of the affective states of the participants, unless they explicitly state their emotions during the interaction. This is very different from physical discussions, where the participants are in the same place and are able to observe each other's facial expressions and gestures, and hear the changes in each other's voices associated with affective state.
The aim of this project is to explore ways in which affective information could be added to the transfer of information in text-based electronic communications. Some examples might include the use of font variation, speed of delivery, or automatic addition of simple text-based graphics.

Affective States in Decision Making
It has been observed that patients with various brain injuries experience problems making apparently simple decisions. The patients lose the ability to care about things that previously mattered to them, and become less creative and decisive, and less able to make strategic decisions. This can render them totally unable to manage their own lives, even though they retain normal functions of perception, memory, and language, perform well on intelligence tests, and even know how they ought to behave in various circumstances.
The aim of the project is to explore the decision making and strategic thinking of subjects without brain damage, but who are experiencing heightened affective states. If patients who can no longer care make poor decisions, perhaps those who care too much about something experience similar difficulties.