Masked Face Recognition Using Bag of CNN: Robust Local Feature Extraction and Region of Interest
DOI:
https://doi.org/10.3991/ijim.v18i14.47459
Keywords:
Face recognition, Bag of CNN (BoCNN), Convolutional Neural Networks (CNNs), Labeled Faces in the Wild (LFW) dataset, Feature extraction, Accuracy, Security systems
Abstract
Face recognition remains a crucial problem in computer vision with numerous applications. This paper introduces an adapted method to tackle the challenges posed by mask-wearing during the COVID-19 pandemic. We propose modifications to the bag of convolutional neural networks (BoCNN) framework, which combines CNNs with the bag of words (BoW) approach. Our main contribution is customizing the BoCNN algorithm to identify masked faces by focusing on the visible facial regions, particularly the eyes and eyebrows. Facial landmarks are detected, and the region of interest (ROI) is extracted using techniques such as MediaPipe. A pre-trained CNN is then applied to sections within the ROI, enabling robust feature extraction that captures intricate details such as lighting variations and facial expressions while reducing the impact of mask occlusions. The extracted features are pooled to create a comprehensive representation for recognition. Extensive experiments on the labeled faces in the wild (LFW) dataset, including masked face images, demonstrate the superiority of our adapted BoCNN approach over traditional BoW and deep feature extraction methods, especially in accurately recognizing masked faces. Additionally, we assess the generalizability of our method across multiple datasets and discuss potential limitations and future research directions. The proposed BoCNN-based solution proves effective in recognizing masked faces, making it highly relevant for applications in security, human-computer interaction, and various other domains affected by the COVID-19 pandemic.
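The pipeline sketched below illustrates the kind of ROI-based feature extraction the abstract describes: MediaPipe Face Mesh landmarks locate the eye and eyebrow region, the crop is split into patches, a pre-trained CNN produces a descriptor per patch, and the descriptors are pooled. This is a minimal, hedged sketch, not the authors' implementation: the ResNet50 backbone, the 2x4 patch grid, the padding margin, and the pooling choice are assumptions for illustration only.

```python
# Minimal sketch of landmark-driven ROI extraction and patch-wise CNN pooling.
# Assumptions (not from the paper): ResNet50 backbone, 2x4 patch grid, mean pooling.
import cv2
import numpy as np
import mediapipe as mp
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input

mp_face_mesh = mp.solutions.face_mesh

# Landmark indices belonging to the eyes and eyebrows, gathered from the
# connection sets shipped with MediaPipe Face Mesh.
PERIOCULAR_IDX = sorted({i for conn in (mp_face_mesh.FACEMESH_LEFT_EYE |
                                        mp_face_mesh.FACEMESH_RIGHT_EYE |
                                        mp_face_mesh.FACEMESH_LEFT_EYEBROW |
                                        mp_face_mesh.FACEMESH_RIGHT_EYEBROW)
                         for i in conn})

# Pre-trained CNN used as a local feature extractor; global average pooling
# yields one descriptor per patch.
backbone = ResNet50(weights="imagenet", include_top=False, pooling="avg")

def extract_roi(image_bgr, pad=10):
    """Crop the visible (unmasked) region around the eyes and eyebrows."""
    h, w = image_bgr.shape[:2]
    with mp_face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1) as fm:
        res = fm.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if not res.multi_face_landmarks:
        return None
    lm = res.multi_face_landmarks[0].landmark
    xs = [lm[i].x * w for i in PERIOCULAR_IDX]
    ys = [lm[i].y * h for i in PERIOCULAR_IDX]
    x0, x1 = max(int(min(xs)) - pad, 0), min(int(max(xs)) + pad, w)
    y0, y1 = max(int(min(ys)) - pad, 0), min(int(max(ys)) + pad, h)
    return image_bgr[y0:y1, x0:x1]

def bocnn_descriptor(image_bgr, grid=(2, 4)):
    """Split the ROI into patches, run the CNN on each, and pool the features."""
    roi = extract_roi(image_bgr)
    if roi is None:
        return None
    rows, cols = grid
    ph, pw = roi.shape[0] // rows, roi.shape[1] // cols
    patches = []
    for r in range(rows):
        for c in range(cols):
            patch = roi[r * ph:(r + 1) * ph, c * pw:(c + 1) * pw]
            patches.append(cv2.resize(patch, (224, 224)))
    batch = preprocess_input(np.stack(patches).astype(np.float32))
    feats = backbone.predict(batch, verbose=0)   # one vector per patch
    return feats.mean(axis=0)                    # pooled representation
```

Descriptors computed this way for gallery and masked probe images could then be compared with cosine similarity or fed to a downstream classifier, mirroring the recognition step described in the abstract.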
Copyright (c) 2024 Omar Muhi, Mariem Farhat, Mondher Frikha
This work is licensed under a Creative Commons Attribution 4.0 International License.