We created a compact device for real-time voice-based anger detection, motivated by the problem of domestic violence. The main design challenges were limited processing power and storage capacity. To address them, we developed a smaller neural network derived from the VGG16 architecture and compressed it with TensorFlow Lite; despite its reduced size, the network retains 80% accuracy. It was trained on the Emotional Speech Database using MFCC features. Implemented on the ESP32-WROVER-E microcontroller, the device integrates audio collection, MFCC computation, neural-network inference, and emotion visualization on an LED matrix within 676.41 milliseconds.
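To make the MFCC step concrete, here is a minimal NumPy sketch of MFCC feature extraction (framing, mel filterbank, log, DCT-II). This is an illustrative implementation, not the exact pipeline used on the ESP32; the parameters (`n_fft=512`, `n_mels=26`, `n_mfcc=13`) are assumptions chosen as common defaults.

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_mfcc=13):
    """Compute MFCCs from a mono audio signal (illustrative parameters)."""
    # Frame the signal and apply a Hann window.
    frames = np.lib.stride_tricks.sliding_window_view(signal, n_fft)[::hop]
    frames = frames * np.hanning(n_fft)
    # Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filterbank spanning 0..sr/2.
    hz_to_mel = lambda f: 2595 * np.log10(1 + f / 700)
    mel_to_hz = lambda m: 700 * (10 ** (m / 2595) - 1)
    mel_pts = mel_to_hz(np.linspace(0, hz_to_mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * mel_pts / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        fbank[i - 1, bins[i - 1]:bins[i]] = np.linspace(
            0, 1, bins[i] - bins[i - 1], endpoint=False)
        fbank[i - 1, bins[i]:bins[i + 1]] = np.linspace(
            1, 0, bins[i + 1] - bins[i], endpoint=False)
    # Log mel energies, then DCT-II to decorrelate; keep first n_mfcc coeffs.
    logmel = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_mfcc), 2 * n + 1) / (2 * n_mels))
    return logmel @ dct.T  # shape: (num_frames, n_mfcc)
```

On a microcontroller the same steps would typically run in fixed point with a precomputed filterbank, but the structure is the same.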
We are investigating physiological signals and body movements for real-time detection of anxiety and stress. As part of this work, we are developing a signal-data selection method designed for microcontrollers that handle small physiological inputs, along with deep-learning models optimized to make efficient use of limited computational resources.
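The blurb above does not specify the selection criterion, so as a purely hypothetical sketch, one cheap form of input selection on a microcontroller is to keep only the most informative fixed-size windows of the stream, scored here by variance (an assumed proxy, not the method described):

```python
import numpy as np

def select_windows(signal, win=64, k=4):
    """Keep the k highest-variance windows of a physiological stream.

    Variance is a hypothetical informativeness score; `win` and `k` are
    illustrative values sized for small microcontroller memory budgets.
    """
    n = len(signal) // win
    windows = np.asarray(signal[: n * win], float).reshape(n, win)
    scores = windows.var(axis=1)          # cheap per-window score
    top = np.argsort(scores)[-k:]         # indices of the k best windows
    return windows[np.sort(top)]          # preserve temporal order
```

Only the selected windows would then be fed to the on-device model, shrinking both memory use and inference cost.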
We are currently studying visual distraction in driving behavior, particularly within a virtual-reality setting. Our focus is a novel model that estimates mental effort from real-time changes in pupil size, allowing us to quantify how demanding a distracting driving task is.
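One common way to relate pupil size to mental effort is the task-evoked pupillary response: pupil diameter during the task expressed as a percent change from a resting baseline. The sketch below illustrates that idea only; it is an assumption, not the model described above, and the smoothing window of 60 samples is illustrative.

```python
import numpy as np

def pupil_effort_index(diameters, baseline, win=60):
    """Percent change of smoothed pupil diameter vs. a resting baseline.

    diameters: per-sample pupil diameters (e.g., mm) from the eye tracker.
    baseline:  mean diameter measured during a rest period.
    win:       moving-average window (illustrative; depends on sample rate).
    """
    d = np.asarray(diameters, float)
    kernel = np.ones(win) / win
    smoothed = np.convolve(d, kernel, mode="valid")  # suppress blinks/noise
    return (smoothed - baseline) / baseline * 100    # % dilation over baseline
```

A sustained positive index under a distracting task, relative to baseline driving, would suggest higher mental effort.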
We aim to apply our expertise in computer vision, deep learning, and face and eye data collection to the development of webcam-based eye-trackers.