Editorial

International Journal of Pervasive Computing and Communications

ISSN: 1742-7371

Article publication date: 30 August 2013


Citation

Khalil, I. (2013), "Editorial", International Journal of Pervasive Computing and Communications, Vol. 9 No. 3. https://doi.org/10.1108/ijpcc.2013.36109caa.001

Publisher: Emerald Group Publishing Limited

Copyright © 2013, Emerald Group Publishing Limited


Editorial

Article Type: Editorial. From: International Journal of Pervasive Computing and Communications, Volume 9, Issue 3

The 14th International Conference on Information Integration and Web-based Applications and Services (iiWAS) and the 10th International Conference on Advances in Mobile Computing and Multimedia (MoMM) were held in December 2012 in Bali, Indonesia. The conferences attracted papers from academics and researchers from all over the world, and the event was a great success, attracting the highest number of participants in the history of iiWAS/MoMM. From among the accepted papers, we invited a few authors to submit extended versions of their work for this special issue of the International Journal of Pervasive Computing and Communications (IJPCC). The submissions were further reviewed by at least two reviewers each and, based on the reviews, four papers were accepted for this special issue. In addition to these four papers, we also accepted one regular paper for this issue.

The first paper in this issue, “Towards pan shot face unlock: using biometric face information from different perspectives to unlock mobile devices” by Rainhard Dieter Findling and Rene Mayrhofer, proposes and evaluates a pan shot face unlock method: a mobile device unlock mechanism that uses all information available from a 180° pan shot of the device around the user’s head, combining biometric face information with data from the device’s built-in sensors. The approach uses grayscale 2D face images, on which frontal and profile face detection is performed. For face recognition, different support vector machines and neural networks were evaluated. To evaluate the pan shot face unlock tool chain, the authors assembled the 2013 Hagenberg stereo vision pan shot face database, which is described in detail in the article. The results indicate that the face recognition approach (with recognition rates above 90 percent for the smaller class with non-uniform class sizes, using non-erroneous data only) is sufficient for further use in this research. However, face detection, a prerequisite to face recognition, remains error prone in the mobile use case with changing background and illumination conditions, and detection rates drop to 60 percent for specific perspectives, which in turn degrades face recognition performance. This indicates that further research towards more robust approaches to extracting faces from images recorded from different perspectives around the user’s head will be necessary to obtain a robust and reliable pan shot face unlock for mobile devices.
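For readers who want a concrete picture of such a detect-then-recognize pipeline, the following minimal sketch illustrates the general idea. It is not the authors’ implementation: it substitutes stock OpenCV Haar cascades for their perspective-specific detectors and a scikit-learn SVM for the classifiers they evaluated, and all parameter values are illustrative.

```python
# Illustrative detect-then-recognize sketch for face unlock.
# NOT the paper's implementation: stock Haar cascades and a
# scikit-learn SVM stand in for the authors' detectors/classifiers.
import cv2
import numpy as np
from sklearn.svm import SVC

# Frontal and profile detectors, one per pan shot perspective.
frontal = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
profile = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_profileface.xml")

def extract_face(gray_image, use_profile=False):
    """Return the first detected face as a fixed-size grayscale patch."""
    detector = profile if use_profile else frontal
    boxes = detector.detectMultiScale(gray_image, scaleFactor=1.1,
                                      minNeighbors=5)
    if len(boxes) == 0:
        return None  # detection failure, the weak link noted above
    x, y, w, h = boxes[0]
    return cv2.resize(gray_image[y:y + h, x:x + w], (64, 64))

def train_recognizer(face_patches, labels):
    """Fit an SVM on flattened face patches (one label per user)."""
    X = np.array([p.ravel() for p in face_patches], dtype=np.float32)
    return SVC(kernel="rbf", probability=True).fit(X, labels)

def unlock_decision(clf, patch, owner_label, threshold=0.9):
    """Unlock only if the owner is recognized with high confidence."""
    proba = clf.predict_proba(patch.ravel().reshape(1, -1))[0]
    owner_idx = list(clf.classes_).index(owner_label)
    return proba[owner_idx] >= threshold
```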

In the second paper, “Creative and innovative e-learning using interactive storytelling” by Asmaa Alsumait and Zahraa Al-Musawi, a storytelling tool called Child Interactive Storytelling (CIS) was developed to help kindergarteners create stories. The tool includes an instrument that measures four characteristics of four- to five-year-old children: general knowledge, creativity, self-confidence and interaction between the children and the technology, in order to assess a child’s progress and to better understand and improve this educational innovation. The interactive storytelling tool helps instructors as well as parents to follow a child’s progress across repeated use of the tool. Experiments indicated that teachers’ evaluations of their children aligned with those measured by the developed tool, which indicates that the interactive storytelling tool is valid. Interactive storytelling is a powerful tool for developing children’s essential skills and general knowledge. As an informal learning method, it provides life experience and promotes the use of vocabulary and communication skills.

The third paper in this issue, “A system for visualizing sound source using augmented reality” by Ruiwei Shen, Tsutomu Terada and Masahiko Tsukamoto, proposes a new sound source recognition interface for the hearing-impaired that uses augmented reality (AR) to present information in the real world via a web camera and a head-mounted display (HMD). The system recognizes environmental sound in real time and informs the user of the type of sound by showing a virtual object in the user’s sight. Furthermore, the user can find the direction of the sound source by means of a microphone array and locate the source through an AR marker attached to the object. The article extends the authors’ previous research on this subject, which focused on the recognition algorithm and presented results to users as text or pictures, an interface not well suited to showing the hearing-impaired the direction of a sound source.
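One standard way to estimate the direction of a sound source with a microphone array is time difference of arrival (TDOA) estimated by cross-correlation. The sketch below illustrates only that general technique for a hypothetical two-microphone array; the paper’s actual array processing and AR rendering are not reproduced.

```python
# Illustrative TDOA sketch: estimate the bearing of a sound source
# from a two-microphone array via cross-correlation. General technique
# only; not the paper's system.
import numpy as np

def estimate_bearing(sig_left, sig_right, mic_distance_m,
                     sample_rate_hz, speed_of_sound=343.0):
    """Estimate source angle (radians, 0 = straight ahead) from the
    inter-microphone delay found by cross-correlation."""
    corr = np.correlate(sig_left, sig_right, mode="full")
    lag = np.argmax(corr) - (len(sig_right) - 1)   # delay in samples
    delay_s = lag / sample_rate_hz
    # Clamp to the physically possible range before taking arcsin.
    sin_theta = np.clip(delay_s * speed_of_sound / mic_distance_m,
                        -1.0, 1.0)
    return np.arcsin(sin_theta)

# Synthetic check: the same chirp arriving 5 samples later on the right.
rate, n = 16000, 1024
t = np.arange(n) / rate
chirp = np.sin(2 * np.pi * 800 * t) * np.hanning(n)
left, right = chirp, np.roll(chirp, 5)
print(np.degrees(estimate_bearing(left, right, 0.2, rate)))
```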

The fourth paper, “Fine-granularity semantic video annotation: an approach based on automatic shot level concept detection and object recognition” by Vanessa El-Khoury, Martin Jergler, Getnet Abebe Bayou, David Coquil and Harold Kosch, addresses the requirements of video content indexing, retrieval and adaptation by proposing a Semantic Video Content Annotation Tool (SVCAT) for structural and high-level semantic annotation. SVCAT is a semi-automatic, MPEG-7 compliant annotation tool that produces metadata according to a new object-based video content model introduced in the paper. Videos are temporally segmented into shots, and shot-level concepts are detected automatically using ImageNet as background knowledge. These concepts serve as a guide for easily locating and selecting objects of interest, which are then tracked automatically to generate object-level metadata. The integration of shot-based concept detection with object localization and tracking drastically alleviates the annotator’s task. As such, SVCAT makes it easy to generate selective and fine-grained metadata which are, for instance, vital for user-centric object-level semantic video operations such as product placement or obscene material removal. Experimental results showed that SVCAT is able to provide accurate object-level video metadata.
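As a small illustration of the temporal segmentation step such a pipeline builds on, the sketch below performs classic histogram-based shot boundary detection with OpenCV. The ImageNet-based concept detection, object tracking and MPEG-7 metadata generation of SVCAT are not reproduced, and the threshold is illustrative.

```python
# Minimal sketch of histogram-based shot boundary detection, the kind
# of temporal segmentation a tool like SVCAT builds on. Concept
# detection and MPEG-7 output are not reproduced here.
import cv2

def detect_shot_boundaries(video_path, threshold=0.5):
    """Return frame indices where the colour histogram changes
    abruptly, taken as candidate shot boundaries."""
    cap = cv2.VideoCapture(video_path)
    boundaries, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None,
                            [8, 8, 8], [0, 256] * 3)
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is not None:
            # Low correlation between consecutive frames => likely cut.
            if cv2.compareHist(prev_hist, hist,
                               cv2.HISTCMP_CORREL) < threshold:
                boundaries.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return boundaries
```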

The last paper, “Mobile-based interpreter of arterial blood gases using knowledge-based expert system” by Majid Al-Taee, Ayman Z. Zayed, Suhail N. Abood, Mohammad A. Al-Ani, Ahmad M. Al-Taee and Hussein A. Hassani, proposes a mobile-based interpreter of arterial blood gas (ABG) tests that aims to provide accurate diagnosis in the face of multiple acid-base and oxygenation disorders. ABG interpretation remains an indispensable tool for assessing and monitoring critically ill patients in the ICU and other critical care settings. A rule-based expert system is designed and implemented using interpretation knowledge gathered from specialist physicians and the peer-reviewed medical literature. The gathered knowledge of ABG tests is organized into premise-explanation pairs to deliver reliable evaluation with the appropriate differential in a timely manner. The performance of the developed interpreter prototype was assessed on a dataset of 74 ABG tests gathered from the medical literature and clinical practice. The results demonstrated that the identified acid-base and oxygenation disorders, and their differential diagnoses, correlate accurately with those determined manually by consultant specialist physicians.
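The premise-explanation organization of such a knowledge base can be pictured with a small rule-engine sketch. The rules below encode only the four textbook primary acid-base disorders with standard reference ranges; the paper’s knowledge base, covering mixed disorders and oxygenation, is far richer, and this sketch is in no way a clinical tool.

```python
# Illustrative premise-explanation rule engine for primary acid-base
# classification, using standard textbook reference ranges. A sketch of
# the rule-based approach only; not the paper's knowledge base.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rule:
    premise: Callable[[dict], bool]   # condition over the ABG values
    explanation: str                  # diagnosis delivered when it fires

RULES: List[Rule] = [
    Rule(lambda t: t["pH"] < 7.35 and t["PaCO2"] > 45,
         "Respiratory acidosis (acidaemia with CO2 retention)"),
    Rule(lambda t: t["pH"] < 7.35 and t["HCO3"] < 22,
         "Metabolic acidosis (acidaemia with low bicarbonate)"),
    Rule(lambda t: t["pH"] > 7.45 and t["PaCO2"] < 35,
         "Respiratory alkalosis (alkalaemia with CO2 washout)"),
    Rule(lambda t: t["pH"] > 7.45 and t["HCO3"] > 26,
         "Metabolic alkalosis (alkalaemia with high bicarbonate)"),
]

def interpret(test: dict) -> List[str]:
    """Return the explanations of every rule whose premise holds."""
    findings = [r.explanation for r in RULES if r.premise(test)]
    return findings or ["No primary acid-base disorder detected"]

# Example ABG: pH 7.25, PaCO2 60 mmHg, HCO3 26 mEq/L.
print(interpret({"pH": 7.25, "PaCO2": 60, "HCO3": 26}))
```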

July 2013

Ismail Khalil
Editor-in-Chief
