This project aims to find an alternative way to classify lung ultrasound (LUS) images of patients affected by COVID-19.
This repository is a student project for the Medical Imaging Diagnostic course of the Master's Degree in Artificial Intelligent Systems at the University of Trento, a.y. 2022-2023.
As explained in the paper *Deep Learning for Classification and Localization of COVID-19 Markers in Point-of-Care Lung Ultrasound*, images are scored as follows:

- Score 0: no artifact in the picture
- Score 1: at least one vertical artifact (B-line)
- Score 2: small consolidation below the pleural surface
- Score 3: wider hyperechogenic area below the pleural surface (> 50%)
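For reference, this scoring can be encoded as a plain integer-to-description mapping. The snippet below is hypothetical and not taken from the project code:

```python
# Hypothetical mapping from score value to its meaning (for reference only).
SCORE_DESCRIPTIONS = {
    0: "no artifact in the picture",
    1: "at least one vertical artifact (B-line)",
    2: "small consolidation below the pleural surface",
    3: "wider hyperechogenic area below the pleural surface (> 50%)",
}
```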
We have been given a partial dataset from the San Matteo hospital, consisting of ~47k frames from 11 patients.
The model I'm trying to build here is composed of three main parts:

- a fine-tuned pre-trained model fitted on this problem;
- a binary classifier that tries to predict, from the first model's behaviour, whether it is confident enough; if `True`, the prediction is definitive, if `False`, the model proceeds to the next part;
- a similarity model to retrieve the similarity between the input frame and the training frames (probably t-SNE).
The idea is to take the predictions in which the model is not confident enough and compare those frames to already-known frames to (hopefully) improve accuracy, as sketched below.
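A minimal sketch of how these three parts could fit together. Everything in it (the class name, the assumption that the backbone also returns an embedding, and the nearest-neighbour vote standing in for the similarity model) is illustrative, not the actual implementation in the notebooks:

```python
import numpy as np
import torch
import torch.nn.functional as F


class ThreeStagePipeline:
    """Sketch of the intended three-part flow (illustrative, not the project code)."""

    def __init__(self, backbone, confidence_clf, train_embeddings, train_labels, k=5):
        self.backbone = backbone                  # fine-tuned pre-trained classifier
        self.confidence_clf = confidence_clf      # binary "is this prediction reliable?" model
        self.train_embeddings = train_embeddings  # (N, D) array of training-frame embeddings
        self.train_labels = train_labels          # (N,) array of scores 0-3 for those frames
        self.k = k                                # number of neighbours for the fallback vote

    @torch.no_grad()
    def predict(self, frame):
        # Part 1: fine-tuned pre-trained model (assumed to return logits and an embedding).
        logits, embedding = self.backbone(frame)
        probs = F.softmax(logits, dim=-1)

        # Part 2: binary classifier on the first model's behaviour (here, its probabilities).
        if bool(self.confidence_clf(probs)):
            return int(probs.argmax())  # confident, so the prediction is definitive

        # Part 3: similarity fallback, comparing the frame to already-known training frames
        # (a plain nearest-neighbour vote stands in for the similarity model here).
        dists = np.linalg.norm(self.train_embeddings - embedding.cpu().numpy(), axis=1)
        nearest = self.train_labels[np.argsort(dists)[: self.k]]
        return int(np.bincount(nearest).argmax())
```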
project
│ README.md
│ Medical_Imaging_Diagnostic_Report.pdf: report on this project
│ model.ipynb: notebook containing the performance of the final model
│ step_by_step.ipynb: notebook containing all the tests I made
│
└───data: contains all the CSV files created from the image data
│
└───dataset_utilities: Python scripts to prepare the .png files
│
└───models: all my models
│
└───plots: plots from my project
│
└───images: folder that contains the LUS images (not uploaded to GitHub)