Monday, April 25, 2011

Semantic-PlaceBrowser: Understanding Place for Place-Scale Context-Aware Computing

I presented this paper at the Pervasive 2010 Workshop: Ubicomp in the Large: Collaborative Sensing and Collective Phenomena (http://www.socionical.eu/index.php?option=com_content&view=article&id=85&Itemid=91)

However, the proceedings are not published anywhere and many of my colleagues have been asking me about the paper, so I will use my blog to post it and introduce it a little bit.

The aim of the paper is to present the idea that pervasive mobile devices can browse the semantic meaning of sensed items in the physical world. Browsing the physical world has interested many researchers:
Castelli et al [11] with "Browsing the world", Nakamura et al [12] with "Ambient Browser" and Bainbridge [13] with "A map-based place-browser" take similar approaches. In [11], the authors use an RFID reader to sense the environment for RFID tags. [12] is an interesting approach which does not actually sense the world, but its similar title persuaded me to review it: the authors implement a browser located in the kitchen, and when users with RFID tags on their hands (gloves, bracelets) move their hands toward the computer, the RFID reader in the computer detects the tags and automatically moves to new web links.

Our Semantic PlaceBrowser is similar to the PlaceSense approach in the sensing part, but differs in the understanding and browsing of semantic meaning. For illustration, we introduce sensing for Bluetooth phones, because Bluetooth phones are pervasive and easy to detect. We have discussed Bluetooth sensing in [10].
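
For readers curious about the sensing part, here is a minimal sketch of what a Bluetooth device inquiry can look like in Java with the JSR-82 API (for example through BlueCove). The class name and structure are my own illustration for this post, not the actual PlaceComm sensing code:

import javax.bluetooth.DeviceClass;
import javax.bluetooth.DiscoveryAgent;
import javax.bluetooth.DiscoveryListener;
import javax.bluetooth.LocalDevice;
import javax.bluetooth.RemoteDevice;
import javax.bluetooth.ServiceRecord;

// Illustrative sketch only: discovers nearby Bluetooth devices and prints their addresses.
public class BluetoothSensor implements DiscoveryListener {

    public void deviceDiscovered(RemoteDevice device, DeviceClass cod) {
        try {
            System.out.println("Found device: " + device.getBluetoothAddress()
                    + " (" + device.getFriendlyName(false) + ")");
        } catch (java.io.IOException e) {
            // Friendly name may not be available; the address is enough to look up the KB.
            System.out.println("Found device: " + device.getBluetoothAddress());
        }
    }

    public void inquiryCompleted(int discType) {
        synchronized (this) { notifyAll(); }   // wake up the thread waiting in main()
    }

    // Not needed for a plain device inquiry.
    public void servicesDiscovered(int transID, ServiceRecord[] records) { }
    public void serviceSearchCompleted(int transID, int respCode) { }

    public static void main(String[] args) throws Exception {
        BluetoothSensor sensor = new BluetoothSensor();
        DiscoveryAgent agent = LocalDevice.getLocalDevice().getDiscoveryAgent();
        synchronized (sensor) {
            agent.startInquiry(DiscoveryAgent.GIAC, sensor);  // general inquiry for nearby devices
            sensor.wait();                                    // block until inquiryCompleted()
        }
    }
}

The discovered Bluetooth addresses are what the browser then looks up in the knowledge base, as described next.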

Actually, the Semantic PlaceBrowser utilizes the PlaceComm framework architecture (see the figure below). The browser focuses on discovering meaning in the knowledge base (KB) from the mobile device.

[Figure: PlaceComm framework architecture]
What makes the Semantic PlaceBrowser different is that every tag (Bluetooth device) is stored in a knowledge base, with semantic links to other objects, entities or people such as hasOwner(device, person), ownDevice(person, device) and so on. Therefore, whenever a device is detected, we not only know about its presence but also know more about its owner. For example, if a device belongs to my friend and the Semantic PlaceBrowser finds it, it means that my friend is around here.
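
To make this concrete, here is a small sketch assuming a Jena RDF model as the knowledge base. The namespace, URIs and class name are made up for illustration and do not come from the real PlaceComm ontology; only the property names follow the paper's hasOwner/ownDevice example:

import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;
import com.hp.hpl.jena.rdf.model.Property;
import com.hp.hpl.jena.rdf.model.Resource;

public class DeviceKB {
    // Hypothetical namespace, not the real PlaceComm ontology URI.
    static final String NS = "http://example.org/place#";

    public static void main(String[] args) {
        Model kb = ModelFactory.createDefaultModel();

        Property hasOwner  = kb.createProperty(NS, "hasOwner");
        Property ownDevice = kb.createProperty(NS, "ownDevice");

        // A sensed Bluetooth device (identified by its address) and its owner.
        Resource device = kb.createResource(NS + "device_001A2B3C4D5E");
        Resource person = kb.createResource(NS + "person_MyFriend");

        kb.add(device, hasOwner, person);    // hasOwner(device, person)
        kb.add(person, ownDevice, device);   // ownDevice(person, device)

        kb.write(System.out, "TURTLE");      // dump the triples for inspection
    }
}

With links like these in the KB, detecting a device address is enough to reach its owner and any other entities connected to it.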
The browsing is implemented by SPARQL queries sent from an agent on the user's mobile device. The knowledge base agent on the server side receives the SPARQL query (in raw text format), parses it, and then actually queries the knowledge base to get an answer.
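
Here is a rough sketch of that server-side handling, again assuming Jena (ARQ) on the knowledge base side; the query text, namespace and class name are only illustrative, not the exact queries or code used in the browser:

import com.hp.hpl.jena.query.Query;
import com.hp.hpl.jena.query.QueryExecution;
import com.hp.hpl.jena.query.QueryExecutionFactory;
import com.hp.hpl.jena.query.QueryFactory;
import com.hp.hpl.jena.query.QuerySolution;
import com.hp.hpl.jena.query.ResultSet;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;

public class KBAgent {

    // Parse the raw SPARQL text received from the mobile agent and run it on the KB.
    public static void answer(Model kb, String rawSparql) {
        Query query = QueryFactory.create(rawSparql);           // parse the raw text
        QueryExecution exec = QueryExecutionFactory.create(query, kb);
        try {
            ResultSet results = exec.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.nextSolution();
                System.out.println("Owner: " + row.get("owner"));
            }
        } finally {
            exec.close();
        }
    }

    public static void main(String[] args) {
        // Tiny stand-in KB with one hasOwner link (illustrative namespace).
        Model kb = ModelFactory.createDefaultModel();
        String ns = "http://example.org/place#";
        kb.add(kb.createResource(ns + "device_001A2B3C4D5E"),
               kb.createProperty(ns, "hasOwner"),
               kb.createResource(ns + "person_MyFriend"));

        // Example of a query that could arrive as raw text from the mobile agent:
        String rawSparql =
            "PREFIX place: <http://example.org/place#> " +
            "SELECT ?owner WHERE { place:device_001A2B3C4D5E place:hasOwner ?owner }";
        answer(kb, rawSparql);
    }
}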


That's it, folks. Thanks for reading.

Link to download the full article:
https://sites.google.com/site/tuannguyenlatrobe/publications/SemanticPlaceBrowserCameraDue.pdf?attredirects=0&d=1

This is a demonstration of how it works: Part 1 Preparation




Part 2: Semantic PlaceBrowser


To get the source code, scripts and NetBeans project and see how it works, please visit this address:
https://sites.google.com/site/tuannguyenlatrobe/research/running-semantic-placebrowser
