Audio-Visual Emotion Recognition Using Big Data Towards 5G
With the advent of next-generation mobile communication technologies (5G), mobile users will potentially gain access to big data processing across different clouds and networks. The growing number of mobile users brings additional expectations for personalized services (e.g., social networking, smart home, health monitoring) at any time, from anywhere, and through any means of connectivity. Because such services and networks generate a massive amount of complex data from multiple heterogeneous sources, an infrastructure is required that recognizes a user's sentiments (e.g., emotions) and behavioral patterns in order to provide a high-quality mobile user experience. To this end, this paper proposes an infrastructure that combines the potential of emotion-aware big data and cloud technology towards 5G. On top of this infrastructure, a bimodal big data emotion recognition system is proposed, whose modalities are speech and face video. Experimental results show that the proposed approach achieves 83.10% emotion recognition accuracy with bimodal input. To demonstrate the suitability and validity of the proposed approach, Hadoop-based distributed processing is used to speed up processing for heterogeneous mobile clients.
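The bimodal recognition described above can be illustrated with a minimal score-level (late) fusion sketch. Note that the fusion weights, emotion labels, and function names below are illustrative assumptions, not details taken from the paper itself:

```python
# Hedged sketch: score-level (late) fusion of a speech classifier and a
# face-video classifier for bimodal emotion recognition.
# The label set and weights are illustrative assumptions.

EMOTIONS = ["anger", "happiness", "sadness", "neutral"]

def fuse_scores(speech_probs, face_probs, w_speech=0.4, w_face=0.6):
    """Combine per-modality posterior probabilities by weighted sum."""
    if len(speech_probs) != len(face_probs):
        raise ValueError("modality score vectors must align")
    return [w_speech * s + w_face * f
            for s, f in zip(speech_probs, face_probs)]

def predict_emotion(speech_probs, face_probs):
    """Return the emotion label with the highest fused score."""
    fused = fuse_scores(speech_probs, face_probs)
    return EMOTIONS[max(range(len(fused)), key=fused.__getitem__)]

# Example: speech mildly favours "sadness", face strongly favours "happiness".
speech = [0.1, 0.2, 0.5, 0.2]
face = [0.05, 0.7, 0.15, 0.1]
print(predict_emotion(speech, face))  # -> happiness
```

In a Hadoop-style deployment, per-modality scoring would run as distributed map tasks over many clients' streams, with fusion applied in the reduce stage; the sketch above shows only the fusion step.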