...users on the go can simply “think” their way through all of their mobile applications.
NeuroPhone is a system that, in essence, lets people dial contacts by thinking about them. Campbell et al. use the Emotiv EPOC headset to read EEG signals and stream them to an iPhone. The iPhone then runs a lightweight classifier that distinguishes the desired signals from noise. More specifically, the authors use P300 signals (a positive EEG peak elicited when a target image appears within a sequence) and physical winks. The process is as follows:
- A set of contact photos appears on the screen
- Each photo is highlighted in turn
- The user concentrates on the photo of the contact they wish to dial
- When that photo is highlighted, a P300 signal is generated (or the user winks)
- The iPhone receives the positive acknowledgement from the wearer and dials that contact
The contact selection process. Graciously taken from the authors' paper.
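The selection loop above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `acknowledged` is a hypothetical stand-in for the on-phone EEG/wink classifier, which here simply compares the highlighted contact against a known target.

```python
def acknowledged(contact, target):
    # Hypothetical stand-in for the classifier's decision: in NeuroPhone
    # this would come from classifying EEG epochs (P300 or wink) on the
    # phone; here we simulate a perfect detector for illustration.
    return contact == target

def select_contact(contacts, target):
    """Highlight each contact photo in turn until the wearer's
    positive acknowledgement is detected, then return (dial) it."""
    while True:
        for contact in contacts:  # each photo is highlighted in turn
            if acknowledged(contact, target):
                return contact    # positive acknowledgement -> dial

print(select_contact(["Alice", "Bob", "Carol"], "Bob"))  # -> Bob
```

In the real system the inner loop is time-driven (photos flash at fixed intervals) and the classifier decision is noisy, which is exactly why the evaluation below looks at how accuracy changes with accumulation time.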
The authors conducted an initial user study of both the 'wink' and 'think' selection modes with three subjects. They found that their wink classifier worked best on relaxed, seated subjects; actions that caused muscle contraction or distracted users (music was played during their test) led to significantly lower accuracy. The authors also showed that accuracy increases as the data accumulation time increases.
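The accuracy-vs-accumulation-time result makes intuitive sense: averaging more single-trial epochs suppresses noise around the P300 peak. Here is a toy NumPy sketch (my own illustration, with a made-up Gaussian "P300" template and noise level, not the authors' data) showing that the averaged epoch gets closer to the true signal as more trials accumulate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy P300-like template: a positive peak around 300 ms into a 1 s epoch
t = np.linspace(0, 1, 256)
p300 = np.exp(-((t - 0.3) ** 2) / 0.002)

def averaged_epoch(n_epochs, noise_sd=2.0):
    """Average n noisy single-trial epochs; more trials -> cleaner estimate."""
    epochs = p300 + rng.normal(0, noise_sd, size=(n_epochs, t.size))
    return epochs.mean(axis=0)

def rms_error(estimate):
    # Root-mean-square distance from the true template
    return np.sqrt(np.mean((estimate - p300) ** 2))

err_short = rms_error(averaged_epoch(2))    # short accumulation time
err_long = rms_error(averaged_epoch(50))    # long accumulation time
print(err_short > err_long)  # longer accumulation -> lower error
```

Noise averages down roughly as 1/sqrt(n), so doubling the accumulation window buys a predictable accuracy gain, at the cost of a slower dialer.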
A video overview of NeuroPhone is also available.
When reading the evaluation I couldn't help but wonder whether the users liked the application itself. Besides its unquestionable novelty, does it serve a function? Neither I nor the authors claim that NeuroPhone was designed to solve the specific problem of calling someone, but I believe the paper could have done a better job stressing the function of NeuroPhone in the greater realm of mobile applications. This project serves as an excellent example of how signal processing and BCI research can be meshed into HCI and mobile and ubiquitous computing. I have read previous papers on P300 spellers (typing letters by selecting them from a visible grid) and was glad to see the concept extended in a way that now seems completely obvious. I also enjoyed the fact that they used the Emotiv EPOC, because it is obviously my chosen headset for my fledgling research. The authors did a great job of explaining the benefits of using cheap(er) headsets like the EPOC and framing their use within the context of mobile computing. Overall, the focus of the paper seemed torn between the success of the mobile classifier and the contact dialer itself. Both points came across, but I read sections out of order to better follow each thread of their contribution. Great stuff.
Beyond the project itself, I really connected with section 2 of the paper. The future outlook posed by the authors literally made me stop and consider the implications of BCI research. How long until we have to worry about people intercepting our 'thoughts' or emotional maps, or forging them in order to interface with technologies? I feel like Tom Clancy wrote something about this already... Regardless, the authors pose an excellent point. Emotion-driven interfaces are on their way, and we have much to consider.
Andrew Campbell, Tanzeem Choudhury, Shaohan Hu, Hong Lu, Matthew K. Mukerjee, Mashfiqui Rabbi, and Rajeev D.S. Raizada. 2010. NeuroPhone: brain-mobile phone interface using a wireless EEG headset. In Proceedings of the second ACM SIGCOMM workshop on Networking, systems, and applications on mobile handhelds (MobiHeld '10). ACM, New York, NY, USA, 3-8. DOI=10.1145/1851322.1851326 http://doi.acm.org/10.1145/1851322.1851326