Recognising real smiles can improve pervasive healthcare

Authors: Xiaolin Li, Zakir Hossain, Jessica Sharmin Rahman

Background: Smiles generally evoke positive feelings and can serve as a form of preventive care when patients interact with health professionals. However, people also smile in awkward situations, when feeling nervous, or simply out of courtesy, and such smiles do not evoke positive feelings in patients. Watching real smiles is therefore more beneficial for healthcare. Recognising real smiles using wearable technology (e.g. accelerometer signals) is considered reliable, as these signals cannot be controlled voluntarily. Aims: This study investigates observers' accelerometer signals to recognise real smiles, with the goal of improving pervasive healthcare.

Methods: Accelerometer signals were recorded from 25 observers while they watched smiling images and videos, with paired (smiles from the same person in both real and posed forms) and single conditions considered separately. Features were extracted from the signals and tested using the two-sample Kolmogorov-Smirnov (K-S) test. Four machine learning classifiers (Random Forest, K-nearest Neighbour, Decision Tree and Logistic Regression) were implemented to evaluate classification performance.
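
The following is a minimal sketch, not the authors' actual pipeline, of how such an analysis could be set up in Python. The arrays `features_real` and `features_posed` are hypothetical placeholders (filled with random data here) standing in for the per-trial feature vectors extracted from the accelerometer signals:

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrices: rows = trials, columns = 55 extracted features.
features_real = np.random.rand(100, 55)   # trials where observers watched real smiles
features_posed = np.random.rand(100, 55)  # trials where observers watched posed smiles

# Two-sample K-S test per feature: the statistic is the maximum absolute
# difference between the two empirical cumulative distribution functions.
ks_results = [ks_2samp(features_real[:, i], features_posed[:, i])
              for i in range(features_real.shape[1])]

# Stack both conditions into one dataset with binary labels (1 = real, 0 = posed).
X = np.vstack([features_real, features_posed])
y = np.array([1] * len(features_real) + [0] * len(features_posed))

# The four classifiers named in the Methods, evaluated with 5-fold cross-validation.
classifiers = {
    "Random Forest": RandomForestClassifier(),
    "K-nearest Neighbour": KNeighborsClassifier(),
    "Decision Tree": DecisionTreeClassifier(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.2f}")
```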

Results: The top 30 of the 55 extracted features were selected based on the highest absolute difference between the cumulative distributions of real and posed smiles. The K-S test showed that the accelerometer signals can significantly distinguish between observed real and posed smiles (p < 0.01). Further, the highest accuracy, 80%, was achieved by the Decision Tree classifier in classifying real and posed smiles.
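
Because the K-S statistic is precisely the maximum absolute difference between two empirical cumulative distributions, the feature-selection step could be sketched as follows. This continues from the illustrative code above and is an assumption about the ranking criterion, not the authors' exact procedure:

```python
# Rank the 55 features by K-S statistic and keep the 30 most discriminative.
ks_stats = np.array([r.statistic for r in ks_results])
top30 = np.argsort(ks_stats)[::-1][:30]

# Classify real vs. posed observations using only the selected features.
scores = cross_val_score(DecisionTreeClassifier(), X[:, top30], y, cv=5)
print(f"Decision Tree (top-30 features): mean accuracy = {scores.mean():.2f}")
```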

Conclusions: The high classification accuracy achieved with wearable-sensor data highlights the potential of this approach in pervasive and mental healthcare to support emotional communication between patients and carers.