Lifelog Enrichment

Multimodal Lifelog Enrichment using contextual sources and EEG. Developing an episodic memory capture and retrieval system that uses contextual sensors and a portable EEG to identify important events in real time and to weight these events accordingly in a digital memory archive. This includes learning the identifiers of interesting events from contextual and EEG sources, developing the associated event segmentation and importance weighting algorithms, and developing subsequent retrieval tools for this archive.

Keywords: lifelog, EEG
Requirements: knowledge of Java/Python programming (or equivalent) and an interest in data analytics.
Main tasks:
  • Gather data using a portable EEG and a lifelogging camera
  • Time-align the data into a single dataset (see the sketch after this list)
  • Apply a state-of-the-art event segmentation algorithm
  • Enrich and segment using EEG data
  • Begin a user trial
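A minimal sketch of the time-alignment and importance-weighting steps, assuming the EEG samples and photo log are exported as CSV files carrying Unix timestamps; the file names and column names (e.g. an "attention" feature) are placeholders, not a fixed format:

```python
import pandas as pd

# Hypothetical export formats: one row per EEG sample and one row per photo,
# both carrying a Unix timestamp in seconds (column names are placeholders).
eeg = pd.read_csv("eeg_samples.csv")      # columns: timestamp, attention
photos = pd.read_csv("photo_log.csv")     # columns: timestamp, filename

eeg["timestamp"] = eeg["timestamp"].astype(float)
photos["timestamp"] = photos["timestamp"].astype(float)
eeg = eeg.sort_values("timestamp")
photos = photos.sort_values("timestamp")

# Attach the nearest EEG reading (within 5 seconds) to every photo.
aligned = pd.merge_asof(photos, eeg, on="timestamp",
                        direction="nearest", tolerance=5.0)

# A deliberately naive importance weight: min-max normalised attention.
att = aligned["attention"]
aligned["importance"] = (att - att.min()) / (att.max() - att.min())

aligned.to_csv("aligned_lifelog.csv", index=False)
```

The result is a single time-aligned table, one row per photo with its nearest EEG-derived features, which the event segmentation and importance-weighting algorithms can then work from.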

Enhanced Visual Analytics of Lifelog Data

Enhanced Visual Analytics of lifelog image streams. Applying state-of-the-art computer vision for semantic concept extraction and building individualised (and group) co-occurrence models to improve the performance of concept detection and to provide additional training for improving the state of the art. A knowledge of machine learning would be useful here, but is not essential.


Keywords: Convolutional Neural Networks, lifelog, concept co-occurrence model
Requirements: knowledge of Java programming (or equivalent).
Main tasks:
  • Apply an off-the-shelf CNN visual classifier to a lifelog archive to extract visual concepts
  • Build a linked co-occurrence graph of these concepts over time (see the sketch after this list)
  • Evaluate the improvement in concept detection from adding the co-occurrence model
  • Integrate into a retrieval system.
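One possible sketch of the co-occurrence model, assuming the CNN detector outputs a set of concept labels per image; the example concepts and the simple additive boost are illustrative choices, not part of the project specification:

```python
from collections import Counter
from itertools import combinations

# Hypothetical detector output: for each image (in time order), the set of
# concept labels whose CNN confidence exceeded a threshold.
detections = [
    {"desk", "monitor", "keyboard"},
    {"desk", "keyboard", "coffee cup"},
    {"street", "car"},
]

# Count pairwise co-occurrences of concepts within each image.
cooccur = Counter()
for concepts in detections:
    for a, b in combinations(sorted(concepts), 2):
        cooccur[(a, b)] += 1

def cooccurrence_boost(candidate, context, cooccur, weight=0.1):
    """Boost a candidate concept's score by how often it co-occurs with
    concepts already detected in the surrounding images."""
    pairs = [tuple(sorted((candidate, c))) for c in context if c != candidate]
    return weight * sum(cooccur[p] for p in pairs)

# Example: "monitor" scored poorly in one image, but "desk" and "keyboard"
# were detected nearby, so its score receives a co-occurrence boost.
print(cooccurrence_boost("monitor", {"desk", "keyboard"}, cooccur))
```

The same counts can be kept per person and per group, which is what allows the individualised and group co-occurrence models described above to be compared.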

Sensing the Self

Ray Kurzweil describes a scenario in which the exponential progression of computing technology eventually (20 years away) brings computing intelligence on a par with human intelligence. At the same time, lifelogging technologies have progressed to the point where we are now able to digitally capture a trace of life experience using wearable sensors (cameras, audio, location, movement, etc.). This project aims to explore the potential for gathering as detailed a life trace as possible for a person using wearable sensors (smartphones, Google Glass-type technologies, communication loggers). The project involves developing a software service that gathers and logs data from all these sensors and makes them available for querying via an API (or interface). Back-end database work would need to be completed, as well as the software to read from the wearable sensors. If the student(s) are interested, an interface component can also be developed.
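A minimal sketch of such a logging service, using Flask and SQLite as one possible (assumed) technology choice for the query API and back-end database; the endpoint names and JSON format are placeholders:

```python
import sqlite3
import time
from flask import Flask, request, jsonify

app = Flask(__name__)
DB = "lifetrace.db"

def get_db():
    conn = sqlite3.connect(DB)
    conn.execute("""CREATE TABLE IF NOT EXISTS readings
                    (ts REAL, sensor TEXT, value TEXT)""")
    return conn

@app.route("/log", methods=["POST"])
def log_reading():
    # Assumed JSON body: {"sensor": "location", "value": "53.385,-6.257"}
    data = request.get_json()
    conn = get_db()
    conn.execute("INSERT INTO readings VALUES (?, ?, ?)",
                 (time.time(), data["sensor"], str(data["value"])))
    conn.commit()
    conn.close()
    return jsonify({"status": "ok"})

@app.route("/query")
def query():
    # e.g. /query?sensor=location&since=1700000000
    sensor = request.args.get("sensor")
    since = float(request.args.get("since", 0))
    conn = get_db()
    rows = conn.execute("SELECT ts, sensor, value FROM readings "
                        "WHERE sensor = ? AND ts >= ?", (sensor, since)).fetchall()
    conn.close()
    return jsonify(rows)

if __name__ == "__main__":
    app.run()
```

The sensor-reading software on the phone or wearable would POST to /log, and any later interface component can be built on top of the /query endpoint.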

Artistic Interfaces to Lifelogs

Wearable video and photo cameras have come into use in recent years. Our wearable photo technology can capture up to 5,000 photos per day, all from the viewpoint of the user. The big challenge in this work is how to summarise those 5,000 photos into beautiful and artistic summaries of a day, a week or a month. This project is to take an archive of many weeks of wearable camera photos and to cluster them automatically into logical photo clusters based on the content of the photos. For example, colour or the objects in a photo can be used to group/cluster related photos together. Once the clusters are created, the project's main focus is to develop novel and unique artistic interfaces to the data that group relevant data and allow for user click-based navigation. These interfaces should be developed in HTML 5 and should be interactive, allowing a user to navigate the data by browsing and following links. DCU can provide all the underlying information for the clustering process as well as providing the data.
Keywords: artistic, interface, interactive HTML 5, clustering

Main tasks and expected output:
  • Cluster photos by visual content (see the sketch after this list)
  • HTML 5 interface
  • Calculate linkages and store in a linkage graph
  • Artistic interface
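A minimal sketch of the clustering step, using k-means over simple RGB colour histograms; the photo folder, descriptor and cluster count are placeholder choices, and DCU-provided features could be used instead:

```python
import glob
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

def colour_histogram(path, bins=8):
    """A crude colour descriptor: a joint RGB histogram of a small thumbnail."""
    img = Image.open(path).convert("RGB").resize((64, 64))
    pixels = np.asarray(img).reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    hist = hist.flatten()
    return hist / hist.sum()

paths = sorted(glob.glob("photos/*.jpg"))          # placeholder folder
features = np.array([colour_histogram(p) for p in paths])

kmeans = KMeans(n_clusters=20, n_init=10, random_state=0).fit(features)

# Group photo paths by cluster, ready for the HTML 5 interface to render.
clusters = {}
for path, label in zip(paths, kmeans.labels_):
    clusters.setdefault(int(label), []).append(path)
```

The cluster membership and the distances between cluster centres can then feed the linkage graph that the interactive HTML 5 interface navigates.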

Music of Life

People are gathering more and more digital data every year, e.g. photos, emails, blog entries, personal sensor data, documents, etc. This project is to take our sensor data and turn it into music. Our sensor streams include location, people encountered, user activity (walking, standing, running, driving, etc.), and even photos (thousands per day). This project aims to identify repeating patterns in this sensor data and to map these patterns to musical sequences. By analysing multiple sensor streams, we will make music (using software MIDI sequencing) and play it back on a computer.
Keywords: music, midi, sensor, life, pattern, compose
Requirements: knowledge of Java programming (or equivalent) and an interest in music.
Main tasks:
  • Analyse the sensor streams to identify meaningful patterns
  • Weight these patterns and identify those that repeat
  • Map these repeating patterns into MIDI note sequences, including choosing a good instrument for each sensor stream (see the sketch after this list)
  • Play back the MIDI sequence and create an HTML page of the source data used to make it
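A minimal sketch of the pattern-to-MIDI mapping, using the mido library as one possible choice for software MIDI sequencing; the activity labels, pitch mapping and instrument are illustrative assumptions:

```python
import mido

# Hypothetical repeating pattern extracted from a sensor stream: each symbol
# is an activity label observed in consecutive time windows.
pattern = ["walking", "walking", "standing", "driving", "walking"]

# Map each activity to a pitch (an arbitrary choice of notes).
pitch_map = {"walking": 60, "standing": 62, "running": 65, "driving": 67}

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)

# Choose an instrument for this stream (program 0 = acoustic grand piano).
track.append(mido.Message("program_change", program=0, time=0))

for activity in pattern:
    note = pitch_map.get(activity, 72)        # fallback pitch for unknown labels
    track.append(mido.Message("note_on", note=note, velocity=64, time=0))
    track.append(mido.Message("note_off", note=note, velocity=64, time=240))

mid.save("life_pattern.mid")
```

Each sensor stream would get its own track and instrument, and the saved MIDI file can be played back alongside an HTML page showing the source data that produced it.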