OmniSense: A Collaborative Sensing Framework for User Context Recognition Using Mobile Phones
HotMobile '10, Annapolis, MD
  • Heng-Tze Cheng, Carnegie Mellon University
  • Senaka Buthpitiya, Carnegie Mellon University
  • Feng-Tso Sun, Carnegie Mellon University
  • Martin L. Griss, Carnegie Mellon University
Date of Original Version
1-1-2010
Type
Article
Abstract or Description

Context information, including a user’s locations and activities, is indispensable for context-aware applications such as targeted advertising and disaster response. Inferring user context from sensor data is intrinsically challenging because of the semantic gap between low-level signals and high-level human activities. Implementing recognition on mobile phones introduces additional challenges due to resource limitations. Most existing work focuses on context recognition using a single mobile phone, so collaboration among multiple phones has received little attention, and recognition accuracy remains susceptible to changes in phone position and the ambient environment. Simply putting a phone in one’s pocket can muffle the microphone and render the camera useless. Furthermore, the naïve statistical learning methods used in prior work are insufficient to model the relationship between locations and activities.
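The collaboration idea, pooling estimates from several nearby phones so that a pocketed device with a muffled microphone does not dominate the decision, can be illustrated with a minimal sketch. The Python below is not the paper's published algorithm; the activity labels, per-phone quality weights, and weighted-average fusion rule are assumptions introduced here purely for illustration.

    # Illustrative sketch (not OmniSense's actual method): fusing activity
    # estimates from several co-located phones. Each phone reports a
    # per-activity confidence distribution plus a self-assessed sensor-quality
    # weight (e.g., low when the phone is pocketed and its microphone muffled).
    from collections import defaultdict

    ACTIVITIES = ["walking", "meeting", "driving"]  # hypothetical label set

    def fuse_estimates(reports):
        """Quality-weighted average of per-phone confidence distributions.

        reports: list of (quality_weight, {activity: confidence}) tuples.
        Returns the fused activity label and its fused confidence.
        """
        totals = defaultdict(float)
        weight_sum = sum(w for w, _ in reports) or 1.0
        for weight, confidences in reports:
            for activity, confidence in confidences.items():
                totals[activity] += weight * confidence
        fused = {a: totals[a] / weight_sum for a in ACTIVITIES}
        best = max(fused, key=fused.get)
        return best, fused[best]

    # Example: a pocketed phone (low quality weight) disagrees with two
    # well-placed phones, and the fusion discounts its vote accordingly.
    reports = [
        (0.2, {"walking": 0.5, "meeting": 0.3, "driving": 0.2}),  # pocketed
        (0.9, {"walking": 0.1, "meeting": 0.8, "driving": 0.1}),
        (0.8, {"walking": 0.2, "meeting": 0.7, "driving": 0.1}),
    ]
    print(fuse_estimates(reports))  # -> ('meeting', ~0.71)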

Citation Information
Heng-Tze Cheng, Senaka Buthpitiya, Feng-Tso Sun, and Martin L. Griss. "OmniSense: A Collaborative Sensing Framework for User Context Recognition Using Mobile Phones." HotMobile '10, Annapolis, MD (2010).
Available at: http://works.bepress.com/martin_griss/6/