A cutting-edge 2D face recognition system based on deep learning
Explore the docs »
Usage
·
Report Bug
·
Request Feature
Table of Contents
Developed for the Biometric Course IT4432E (Semester 20241) at HUST, this project implements a 2D face recognition system using advanced algorithms for accurate verification. Specifically, we explore the topic through two approaches:
- Using a pre-trained model (FaceNet) for feature extraction, then training a Support Vector Machine (SVM) on those features for classification
- Training a Siamese network architecture with an L1 distance layer from scratch
These approaches combine the convenience and accuracy of pre-trained models with the educational value of training custom architectures from scratch. After training, we use the resulting models to build GUI applications with the PyQt6 and Kivy frameworks.
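The first approach can be sketched as follows. This is a minimal illustration, not the project's actual code: the random vectors below are hypothetical stand-ins for real FaceNet embeddings, which in the real pipeline come from running MTCNN-cropped faces through FaceNet.

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder, Normalizer
from sklearn.svm import SVC

# Hypothetical stand-ins for 512-dimensional FaceNet embeddings
rng = np.random.default_rng(42)
embeddings = rng.normal(size=(40, 512))
labels = ["alice"] * 20 + ["bob"] * 20

# L2-normalize embeddings, a common step before fitting an SVM on FaceNet features
X = Normalizer(norm="l2").fit_transform(embeddings)

# Encode string identities as integers for the classifier
encoder = LabelEncoder()
y = encoder.fit_transform(labels)

# Linear SVM with probability estimates, so verification can apply a confidence threshold
classifier = SVC(kernel="linear", probability=True)
classifier.fit(X, y)

# Predict the identity of a query embedding
query = X[:1]
predicted_name = encoder.inverse_transform(classifier.predict(query))[0]
confidence = classifier.predict_proba(query).max()
```

At verification time, a prediction whose confidence falls below a chosen threshold can be rejected as an unknown face.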
The project is built with the following development tools and technologies:
- Camera detection window:
Scan all available cameras and select the one you want to use for face recognition.
- Enrollment tab:
Enter your name and take a series of photos to enroll in the system.
- Verification tab:
Enter your name and take a photo to verify your identity.
For privacy reasons, we do not show our team's camera feed in the demo. You can download the application and try it yourself.
For more detailed usage and explanation, please refer to the Documentation
Tip
If you only want to see the application demo, download it, follow the installation instructions on the release page, and stop reading here.
For those interested in the project's details, including its structure, code, training process, evaluation, results, etc., continue with the following sections.
Just want to see the model training process? Check out our Kaggle notebooks:
Want to explore the full project, including data preprocessing, training, and the application? Follow our instructions below.
- Clone the repo
git clone https://github.com/chutrunganh/Biometric_IT4432E.git
- Install dependencies
Navigate to the project folder:
cd REPLACE_WITH_YOUR_PATH/Biometric_IT4432E
- With Linux
# Create and activate a Python virtual environment
python3 -m venv venv
source venv/bin/activate
# Install pip tool if you haven't already
sudo pacman -Syu base-devel python-pip # On Arch-based distros
# sudo apt update && sudo apt upgrade -y && sudo apt install build-essential python3-pip # On Debian-based distros, use this command instead
pip install --upgrade pip setuptools wheel
# Install all required dependencies
pip install -r requirements_for_Linux.txt
- With Windows
python -m venv venv
.\venv\Scripts\activate.bat # If running in CMD
# .\venv\Scripts\Activate.ps1 # If running in PowerShell
# Install pip tool if you haven't already
python -m ensurepip --upgrade
pip install --upgrade pip setuptools wheel
# Install all required dependencies
pip install -r requirements_for_Windows.txt
# Install ipykernel in your virtual environment
pip install ipykernel
python -m ipykernel install --user --name=venv --display-name "Python (venv)" # Create a new kernel for Jupyter
Choose the kernel named venv when running Jupyter Notebook.
It may take about 15-30 minutes to download all dependencies, depending on your internet speed.
Important
This project requires Python 3.12.x. Some other versions, such as 3.10.x, have been reported to have compatibility issues with dependencies.
- Follow the code files
Follow the code files from 1 to 4 (you can choose to follow just Pipeline1 or Pipeline2), read the instructions, and run the code inside them to generate and process data. Note that these files form a pipeline, so do not skip any of them; otherwise, errors will occur due to missing files.
Like this project? Give a star ⭐ to VerifyMe and make it even stronger! 💪
Here are the main components of the project with their respective functionalities:
Biometric_IT4432E
│
├── Slide_And_Report
│
├── requirements_for_Linux/Windows.txt -> contains all required dependencies to run locally
│
├── data -> contains images for training models
│
├── model_saved -> stores models after training
│
├── preprocessing_data -> contains preprocessed data for Pipeline1
│   ├── faces.npz -> contains compressed face data
│   └── embeddings.npz -> contains compressed embedding data
│
├── preprocessing_data(for_Siamese) -> contains preprocessed data for Pipeline2
│   └── faces.npz -> contains compressed face data
│
├── application_data
│   ├── validation_images -> contains images from the enrollment process
│   │   ├── user1
│   │   ├── user2
│   │   └── ...
│   │
│   └── settings.json -> stores the detected camera index
│
├── 1.DataCollection.ipynb -> Collect data for training
│
├── 2.Pipeline1 DataPreprocessing.ipynb -> Preprocess data, detect faces (using MTCNN), extract features (using FaceNet)
│
├── 2.Pipeline2 DataPreprocessing.ipynb -> Preprocess data for the Siamese network
│
├── 3.Pipeline1 SVM_Classifier.ipynb -> Train the SVM model
│
├── 3.Pipeline2 Siamese_Network.ipynb -> Train the Siamese network
│
├── 4.Pipeline1 Application_FaceNet_SVM_CLI.ipynb
│
├── 4.Pipeline1 Application_FaceNet_SVM_GUI.py
│
├── 4.Pipeline2 Application_Siamese_Network_CLI.ipynb
│
└── 4.Pipeline2 Application_Siamese_Network_GUI.py
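The Pipeline2 notebooks above train the Siamese network with the L1 distance layer described earlier. A minimal sketch of such a layer, assuming a Keras setup (the class name is our own illustration; the project's implementation may differ):

```python
import tensorflow as tf

class L1Dist(tf.keras.layers.Layer):
    """Element-wise absolute difference between two embedding batches.

    In a Siamese network, this merges the anchor and validation embeddings
    before a final sigmoid unit scores the pair as same/different.
    """

    def call(self, anchor_embedding, validation_embedding):
        return tf.math.abs(anchor_embedding - validation_embedding)
```

Because the layer has no trainable weights, it simply reshapes the verification problem: the network learns embeddings such that the L1 difference of a matching pair is easy to classify as "same".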
And here is the workflow of the project:
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
For the success of this project, I want to give special thanks to:
- Project supervisors: Dr. Tran Nguyen Ngoc, Dr. Ngo Thanh Trung
- Team members:

  | Name | Student ID |
  | --- | --- |
  | Chu Trung Anh (team leader) | 20225564 |
  | Bui Duy Anh | 20225563 |
  | Pham Minh Tien | 20225555 |
Distributed under the Apache-2.0 License. See LICENSE for more information.
This project is maintained by: Chu Trung Anh - Email.
Feel free to contact me if you have any questions or suggestions.