Sensor Web Technologies Group
How we learn, work and play has been forever transformed by the always-on connectivity of the Internet. Given the flood of information now available, the automatic analysis of media in all its forms (including text, image, video, environmental, VR and 3D data) is centre stage in computing, because content-based applications such as retrieval, recognition, summarisation and recommendation all depend on high-quality analysis of content.
The Sensor Web Technologies group focuses on multi-modal analysis and interaction tools that extract and leverage useful information from multimedia data. We target traditional forms of media, such as text and video, as well as environmental data, real-time web sources, social media, wearable devices, physiological sensors such as EEG, heart rate (HR) and galvanic skin response (GSR), Internet of Things (IoT) devices, and fixed infrastructure in the physical world. Our research seeks to bridge the boundary between the digital and physical worlds, integrate new forms of media, provide fast-reaction services for media events, and gather and curate large, valuable archives of media data. This in turn supports the gathering, organising, indexing, searching, linking and presentation of that data. Our research efforts focus on multimodal media analysis, human interaction analytics, real-world lifelogging and navigating digital archives.
Example Research Projects
>>>Insert example research projects here. <<<