References of "Oyedotun, Oyebade 50025901"
Peer Reviewed
Facial Expression Recognition via Joint Deep Learning of RGB-Depth Map Latent Representations
Oyedotun, Oyebade UL; Demisse, Girum UL; Shabayek, Abd El Rahman UL et al

in 2017 IEEE International Conference on Computer Vision Workshop (ICCVW) (2017, August 21)

Humans successfully use facial expressions to convey their emotional states. However, replicating such success in the human-computer interaction domain is an active research problem. In this paper, we propose a deep convolutional neural network (DCNN) for joint learning of robust facial expression features from fused RGB and depth map latent representations. We posit that learning jointly from both modalities results in a more robust classifier for facial expression recognition (FER) than learning from either modality independently. In particular, we construct a learning pipeline that learns several hierarchical levels of feature representations and then fuses the RGB and depth map latent representations for joint learning of facial expressions. Our experimental results on the BU-3DFE dataset validate the proposed fusion approach: a model learned from the joint modalities outperforms models learned from either modality alone.
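The fusion idea in this abstract can be illustrated with a minimal sketch: two per-modality extractors produce latent codes that are concatenated before a joint classifier. Everything here is an illustrative assumption, not the paper's architecture: random linear projections stand in for the trained convolutional streams, and the dimensions and six-class output are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Hypothetical per-modality extractors (stand-ins for the convolutional
# streams that would produce the latent representations).
W_rgb = rng.standard_normal((64, 32)) * 0.1    # RGB stream: 64-dim input -> 32-dim latent
W_depth = rng.standard_normal((64, 32)) * 0.1  # depth-map stream: same shape
W_joint = rng.standard_normal((64, 6)) * 0.1   # fused 64-dim latent -> 6 expression classes

def predict(rgb_feat, depth_feat):
    z_rgb = relu(rgb_feat @ W_rgb)        # latent code of the RGB modality
    z_depth = relu(depth_feat @ W_depth)  # latent code of the depth modality
    z = np.concatenate([z_rgb, z_depth])  # fusion of the two latent codes
    logits = z @ W_joint                  # joint classifier over the fused code
    return int(np.argmax(logits))

rgb = rng.standard_normal(64)
depth = rng.standard_normal(64)
label = predict(rgb, depth)
```

The key design point the abstract argues for is that the classifier weights (`W_joint` here) see both modalities at once, rather than training one model per modality and averaging their decisions.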

Peer Reviewed
Prototype-Incorporated Emotional Neural Network (PI-EmNN)
Oyedotun, Oyebade UL; Khashman, Adnan

in IEEE Transactions on Neural Networks and Learning Systems (2017)

Artificial neural networks (ANNs) aim to simulate biological neural activity. Interestingly, many ‘engineering’ advances in ANNs have drawn motivation from cognition and psychology studies. Two important learning theories that have been the subject of active research are the prototype and adaptive learning theories. The learning rules employed for ANNs can be related to adaptive learning theory, where several examples of the different classes in a task are supplied to the network for adjusting internal parameters. Conversely, prototype learning theory uses prototypes (representative examples), usually one prototype per class in the task. These prototypes are matched systematically against new examples so that class association can be achieved. In this paper, we propose and implement a novel neural network algorithm that modifies the emotional neural network (EmNN) model to unify the prototype and adaptive learning theories. We refer to our new model as the Prototype-Incorporated Emotional Neural Network (PI-EmNN). Furthermore, we apply the proposed model to two challenging real-life tasks, namely static hand gesture recognition and face recognition, and compare the results to those obtained using the popular back-propagation neural network (BPNN), emotional back-propagation neural network (EmNN), deep networks, and an exemplar classification model, k-nearest neighbor (k-NN).
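A minimal sketch of how the two learning theories named above can be combined, assuming an LVQ-style scheme rather than the paper's EmNN-based formulation: class means serve as prototypes for systematic matching (prototype learning), and a parameter update nudges the winning prototype in response to each new example (adaptive learning). The toy data, learning rate, and update rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two well-separated Gaussian classes in 2-D.
X0 = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(20, 2))
X1 = rng.normal(loc=[3.0, 3.0], scale=0.3, size=(20, 2))

# Prototype learning: one representative example per class (here, the class mean).
prototypes = np.stack([X0.mean(axis=0), X1.mean(axis=0)])

def classify(x):
    # Systematic matching of a new example against the stored prototypes.
    dists = np.linalg.norm(prototypes - x, axis=1)
    return int(np.argmin(dists))

def adapt(x, y, lr=0.1):
    # Adaptive component: move the winning prototype toward the example if
    # the match was correct, away from it otherwise (LVQ-style update),
    # instead of keeping the prototypes fixed.
    pred = classify(x)
    direction = 1.0 if pred == y else -1.0
    prototypes[pred] += direction * lr * (x - prototypes[pred])

pred = classify(np.array([0.2, 0.1]))  # matches the class-0 prototype
adapt(np.array([0.2, 0.1]), y=0)       # prototype adjusted toward the example
```

The point of the combination is that classification stays exemplar-like (match against one representative per class) while the representatives themselves remain trainable, which is the unification the abstract describes at a high level.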

Peer Reviewed
Training Very Deep Networks via Residual Learning with Stochastic Input Shortcut Connections
Oyedotun, Oyebade UL; Shabayek, Abd El Rahman UL; Aouada, Djamila UL et al

in 24th International Conference on Neural Information Processing, Guangzhou, China, November 14–18, 2017 (2017, July 31)

Many works have posited the benefit of depth in deep networks. However, one of the problems encountered in training very deep networks is feature reuse; that is, features are ‘diluted’ as they are forward propagated through the model. Hence, later network layers receive less informative signals about the input data, consequently making training less effective. In this work, we address the problem of feature reuse by taking inspiration from an earlier work that employed residual learning to alleviate it. We propose a modification of residual learning for training very deep networks to realize improved generalization performance; specifically, we allow stochastic shortcut connections of identity mappings from the input to the hidden layers. We perform extensive experiments using the USPS and MNIST datasets. On the USPS dataset, we achieve an error rate of 2.69% without employing any form of data augmentation (or manipulation). On the MNIST dataset, we reach a comparable state-of-the-art error rate of 0.52%. Notably, these results are achieved without employing any explicit regularization technique.
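The stochastic input shortcut idea can be sketched as follows, assuming a toy fully connected stack in which each hidden layer, with some probability during training, adds an identity-mapped copy of the network input to its activation. The width, depth, activation, and shortcut probability are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(2)

DIM, DEPTH, P_SHORTCUT = 16, 8, 0.5  # assumed toy hyperparameters

# A stack of square layers so the input can be added to any hidden activation
# without a projection.
weights = [rng.standard_normal((DIM, DIM)) / np.sqrt(DIM) for _ in range(DEPTH)]

def forward(x, train=True):
    h = x
    for W in weights:
        h = np.tanh(h @ W)                    # ordinary layer transform
        if train and rng.random() < P_SHORTCUT:
            h = h + x                         # stochastic identity shortcut
                                              # from the *input*, not the
                                              # previous layer as in a
                                              # standard residual block
    return h

out = forward(rng.standard_normal(DIM))
```

Because later layers intermittently receive the raw input again, the informative signal is not progressively diluted through the stack, which is the feature-reuse problem the abstract targets; the randomness of the shortcut also acts as an implicit regularizer, consistent with the abstract's note that no explicit regularization was used.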
