Research Publications (Accounting and Informatics)

Permanent URI for this collection: http://ir-dev.dut.ac.za/handle/10321/212

Search Results

Now showing 1 - 1 of 1
  • Item
    Computer vision: the effectiveness of deep learning for emotion detection in marketing campaigns
    (The Science and Information Organization, 2022-05) Naidoo, Shaldon Wade; Naicker, Nalindren; Patel, Sulaiman Saleem; Govender, Prinavin
    As businesses move towards more customer-centric business models, marketing functions are becoming increasingly interested in gathering natural, unbiased feedback from customers. This has led to increased interest in computer vision studies into emotion recognition from facial features, for application in marketing contexts. This research study was conducted using the publicly available Facial Emotion Recognition 2013 dataset, published on Kaggle. This article provides a comparative study of five deep learning algorithms for emotion recognition, namely the Convolutional Neural Network (CNN), Multilayer Perceptron (MLP), Recurrent Neural Network (RNN), Generative Adversarial Network (GAN) and Long Short-Term Memory (LSTM) models. Comparisons between these models were made quantitatively using the metrics of accuracy, precision, recall and F1-score, as well as qualitatively by determining goodness-of-fit and learning rate from accuracy and loss curves. The results of the study show that the CNN, GAN and MLP models overfitted the data, and the LSTM model failed to learn at all. Only the RNN adequately learnt from the data. The RNN was found to exhibit a low learning rate, and the computational intensiveness of training the model resulted in premature termination of the training process. However, the model still achieved a test accuracy of up to 72%, the highest of all models studied, and it is possible that this could be increased through further training. The RNN also had the best F1-score (0.70), precision (0.73) and recall (0.73) of all models studied.
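
    For illustration only: a minimal sketch of how the quantitative comparison described in the abstract could be computed, assuming Python with scikit-learn. The labels, predictions and macro averaging below are placeholder assumptions for demonstration, not details taken from the article.

    # Hedged sketch: evaluating a facial-emotion classifier's predictions on a
    # held-out FER2013 test split with the metrics named in the abstract.
    import numpy as np
    from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

    # Placeholder ground-truth labels and model predictions (FER2013 has 7 emotion
    # classes, indexed 0-6); in practice these would come from the test split and
    # from whichever model (CNN, MLP, RNN, GAN or LSTM) is being evaluated.
    y_true = np.array([0, 3, 6, 3, 2, 4, 3, 5])
    y_pred = np.array([0, 3, 6, 2, 2, 4, 3, 3])

    print("accuracy :", accuracy_score(y_true, y_pred))
    # Macro averaging weights all classes equally; the abstract does not state
    # which averaging the authors used, so this is an assumption.
    print("precision:", precision_score(y_true, y_pred, average="macro", zero_division=0))
    print("recall   :", recall_score(y_true, y_pred, average="macro", zero_division=0))
    print("F1-score :", f1_score(y_true, y_pred, average="macro", zero_division=0))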