### Pneumonia Classification

This is a simple `REST` API that classifies `pneumonia` given a chest X-ray image of a human being. The following are the possible results when the model does its classification.



1. pneumonia bacteria
2. pneumonia virus
3. normal



### Android `apk`

You can download and install our Android app [**here**](/apk/app.apk).

After you have downloaded the app, follow these steps to install it on your Android phone.

### How to install the app on Android

- On devices running Android `8.0` (API level 26) and higher, you must navigate to the `Install unknown apps` system settings screen to enable app installations from a particular location (i.e. the web browser you are downloading the app from).

- On devices running Android `7.1.1` (API level 25) and lower, you should enable the `Unknown sources` system setting, found in `Settings` > `Security` on your device.

### App Demo video on `iOS`

> The following demo video was tested on `iOS 16` using an iPhone `X`.

https://github.com/CrispenGari/pneumonia-infection/assets/59051957/a8910cd6-bf02-4119-a353-23de73e8b2cf

### Deployed Server

The deployed version of the API can be found at [PC-API](https://pc-djhy.onrender.com/), where you can make classification requests to the server and get responses.

### Starting the server locally

To run this server and make predictions on your own images, follow these steps:

0. Clone this repository by running the following command:

```shell
git clone https://github.com/CrispenGari/pneumonia-infection.git
```

1. Navigate to the `server` folder inside `pneumonia-infection` by running the following command:

```shell
cd pneumonia-infection/server
```

2. Create a virtual environment and activate it. You can create a virtual environment in many ways; for example, on Windows you can run the following command (a Linux/macOS alternative is shown right after it):

```shell
virtualenv venv && .\venv\Scripts\activate
```
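
On Linux or macOS, an equivalent using the built-in `venv` module would be:

```shell
python3 -m venv venv && source venv/bin/activate
```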

3. Run the following command to install the required packages:

```shell
pip install -r requirements.txt
```

4. Navigate to the folder where the `app.py` file is located and run:

```shell
python app.py
```

### Models

We have `2` models, which are specified by version and have different architectures.

1. **M**ulti **L**ayer **P**erceptron **(MLP)** - `v0`
2. **LeNET** - `v1`

### 1. MLP architecture

Our simple **M**ulti **L**ayer **P**erceptron **(MLP)** architecture for categorical image classification on `chest-x-ray` images looks as follows:

```py
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, input_dim, output_dim, dropout=.5):
        super(MLP, self).__init__()
        self.classifier = nn.Sequential(
            nn.Linear(input_dim, 250),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(250, 100),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(100, output_dim)
        )

    def forward(self, x):
        # x = [batch size, height, width]
        batch_size = x.shape[0]
        x = x.view(batch_size, -1)
        # x = [batch size, height * width]
        x = self.classifier(x)  # x = [batch_size, output_dim]
        return x
```

> All images are transformed to `grayscale`.
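
As a rough illustration of how an image could be prepared for the `MLP`, here is a minimal sketch using `torchvision` transforms. The `96 x 96` resize is an assumption (chosen because `96 * 96 = 9216` is the input dimension consistent with the `2,329,653` trainable parameters reported for the `MLP` below); the exact preprocessing pipeline lives in the training notebooks linked at the end of this README.

```py
import torch
from PIL import Image
from torchvision import transforms

# assumed preprocessing: grayscale + resize to 96x96, then convert to a [0, 1] tensor
preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.Resize((96, 96)),
    transforms.ToTensor(),
])

image = Image.open("normal.jpeg")  # any chest x-ray image
x = preprocess(image)              # [1, 96, 96]; the leading dim acts as the batch dim in MLP.forward

model = MLP(input_dim=96 * 96, output_dim=3)  # 3 classes: NORMAL, BACTERIA, VIRUS
with torch.no_grad():
    logits = model(x)              # [1, 3]
print(logits.shape)
```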

### 2. LeNET architecture

The **`LeNet`** architecture for categorical image classification on `chest-x-ray` images looks as follows:

```py
import torch.nn as nn

class LeNet(nn.Module):
    def __init__(self, output_dim):
        super(LeNet, self).__init__()
        self.maxpool2d = nn.MaxPool2d(2)
        self.relu = nn.ReLU()
        self.convs = nn.Sequential(
            nn.Conv2d(
                in_channels=1,
                out_channels=6,
                kernel_size=5
            ),
            nn.MaxPool2d(2),
            nn.ReLU(),
            nn.Conv2d(
                in_channels=6,
                out_channels=16,
                kernel_size=5
            ),
            nn.MaxPool2d(2),
            nn.ReLU()
        )
        self.classifier = nn.Sequential(
            nn.Linear(16 * 5 * 5, 120),
            nn.ReLU(),
            nn.Linear(120, 84),
            nn.ReLU(),
            nn.Linear(84, output_dim)
        )

    def forward(self, x):
        # x = [batch size, 1, 32, 32]
        x = self.convs(x)
        # x = [batch_size, 16, 5, 5]
        x = x.view(x.shape[0], -1)  # x = [batch size, 16*5*5]
        x = self.classifier(x)
        return x
```
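
As a quick sanity check of the tensor shapes annotated in the comments above, a dummy forward pass with a batch of `32 x 32` grayscale images could look like this (illustrative only; it is not part of the training code):

```py
import torch

model = LeNet(output_dim=3)    # 3 classes: NORMAL, BACTERIA, VIRUS
x = torch.randn(8, 1, 32, 32)  # [batch size, channels, height, width]
with torch.no_grad():
    logits = model(x)
print(logits.shape)            # torch.Size([8, 3])
```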

### Model Metrics

First, let's look at how many examples our dataset had in each set. We had `3` sets: `train`, `validation` and `test`. The following table shows how many examples from each set were used to train and evaluate these models.



| ARCHITECTURE | TRAIN EXAMPLES | VALIDATION EXAMPLES | TEST EXAMPLES | TOTAL EXAMPLES |
| ------------ | -------------- | ------------------- | ------------- | -------------- |
| MLP          | 5,442          | 1,135               | 1,135         | 7,712          |
| LeNet        | 5,442          | 1,135               | 1,135         | 7,712          |


All models were trained for `20` epochs. The following table shows the training summary for each model architecture.



| ARCHITECTURE | TOTAL EPOCHS | LAST SAVED EPOCH | TOTAL TRAINING TIME |
| ------------ | ------------ | ---------------- | ------------------- |
| MLP          | 20           | 14               | 1:39:17.87          |
| LeNet        | 20           | 20               | 0:55:03.84          |

> We can see that the `mlp` model architecture took longer to train for `20` epochs than the `lenet` architecture. This is because it has more trainable parameters than `lenet`. We can further visualize the training time for each model using line graphs:

1. `MLP` (v0)

_(training time line graph)_

2. `LeNet` (v1)

_(training time line graph)_

These models have different numbers of parameters. The following table shows the parameter counts for each architecture.



| ARCHITECTURE | TOTAL PARAMETERS | TRAINABLE PARAMETERS |
| ------------ | ---------------- | -------------------- |
| MLP          | 2,329,653        | 2,329,653            |
| LeNet        | 61,111           | 61,111               |
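
These counts can be reproduced with a standard PyTorch one-liner, for example (the `96 * 96` input dimension for the `MLP` is the assumption discussed in the preprocessing sketch above):

```py
def count_parameters(model):
    # number of trainable parameters in a PyTorch model
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

print(count_parameters(MLP(input_dim=96 * 96, output_dim=3)))  # 2,329,653
print(count_parameters(LeNet(output_dim=3)))                   # 61,111
```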

### Models Evaluation Metrics

The following table shows the `train`, `validation` and `test` accuracy and loss for the best saved model of each version.



| MODEL NAME | MODEL ARCHITECTURE | MODEL DESCRIPTION | MODEL VERSION | TEST ACCURACY | VALIDATION ACCURACY | TRAIN ACCURACY | TEST LOSS | VALIDATION LOSS | TRAIN LOSS |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| pneumonia_mlp.pt | MLP | pneumonia classification using Multi Layer Perceptron (MLP) | v0 | 75.46% | 75.46% | 73.89% | 0.602 | 0.602 | 0.606 |
| pneunomia_lenet.pt | LeNET | pneumonia classification model using the modified LeNet architecture. | v1 | 76.51% | 76.51% | 78.49% | 0.551 | 0.551 | 0.505 |

We can further visualize the training history of each model architecture using line graphs of `train loss`, `validation loss`, `train accuracy` and `validation accuracy`, and see how each architecture managed to reduce loss and increase accuracy over the training epochs.

1. `MLP` (v0)

_(training history line graph)_

2. `LeNet` (v1)

_(training history line graph)_

Next, we are going to show the model evaluation metrics on the whole `test` set, which contains `1,135` images mapped to their labels.
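
The confusion matrices and classification reports below can be reproduced with, for example, `scikit-learn`. A minimal sketch (the `y_true` and `y_pred` lists here are placeholders, not the actual test labels and predictions) would be:

```py
from sklearn.metrics import classification_report, confusion_matrix

# placeholder labels/predictions; in practice these come from running the model over the test loader
y_true = [0, 1, 2, 1, 0, 2]
y_pred = [0, 1, 1, 1, 0, 2]

labels = ["NORMAL", "BACTERIA", "VIRUS"]
print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred, target_names=labels))
```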

### confusion matrix (`CM`)

1. `MLP` (v0)

The following visualization is a confusion matrix for the `MLP` model architecture, which was tested using `1,135` images from the `test` dataset.

_(confusion matrix)_

2. `LeNet` (v1)

The following visualization is a confusion matrix for the `LeNet` model architecture, which was tested using `1,135` images from the `test` dataset.

_(confusion matrix)_

### classification report (`CR`)

In this section we show the summary of the `classification report` based on the best saved checkpoint of each model.

1. `MLP` (v0)

This is the `mlp` model's `cr`.



| # | precision | recall | f1-score | support |
| --- | --- | --- | --- | --- |
| 0 (NORMAL) | 0.81 | 0.84 | 0.82 | 337 |
| 1 (BACTERIA) | 0.73 | 0.90 | 0.81 | 442 |
| 2 (VIRUS) | 0.76 | 0.50 | 0.60 | 356 |
| accuracy | | | 0.76 | 1135 |
| micro avg | 0.76 | 0.75 | 0.74 | 1135 |
| weighted avg | 0.76 | 0.76 | 0.75 | 1135 |


2. `LeNet` (v1)

This is the `LeNet` model's `cr`.



| # | precision | recall | f1-score | support |
| --- | --- | --- | --- | --- |
| 0 (NORMAL) | 0.88 | 0.82 | 0.85 | 337 |
| 1 (BACTERIA) | 0.70 | 0.95 | 0.81 | 442 |
| 2 (VIRUS) | 0.79 | 0.49 | 0.61 | 356 |
| accuracy | | | 0.77 | 1135 |
| micro avg | 0.79 | 0.75 | 0.75 | 1135 |
| weighted avg | 0.78 | 0.77 | 0.76 | 1135 |


### Pneumonia classification

During model evaluation both models were tested to see if they were classifying images correctly. A sample of `24` images was taken from the first batch of test data and here are the visual results of the classification for each model.

1. `MLP` (v0)

_(sample classifications)_

2. `LeNet` (v1)

_(sample classifications)_

> The images whose labels are marked in `RED` are the ones the model misclassified.

### REST API

This project exposes a `REST` API running on port `3001`, which can be configured by changing the `AppConfig` in the `app/app.py` file, which looks as follows:

```py
class AppConfig:
    PORT = 3001
    DEBUG = False
```
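
As an illustration of how this configuration might be consumed, a minimal Flask-style entry point could look like the sketch below. This is an assumption for illustration only; the actual wiring in `app/app.py` may differ:

```py
from flask import Flask

class AppConfig:
    PORT = 3001
    DEBUG = False

app = Flask(__name__)

if __name__ == "__main__":
    # bind the REST API to the configured port
    app.run(host="0.0.0.0", port=AppConfig.PORT, debug=AppConfig.DEBUG)
```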

The server exposes two model versions: `v0`, which is the `mlp` model, and `v1`, which is the `lenet` architecture. When making a request to the server you need to specify the model version you want to use for predictions. The `URL` looks as follows:

```shell
# remote
https://pc-djhy.onrender.com/api/<MODEL_VERSION>/pneumonia

# locally
http://localhost:3001/api/<MODEL_VERSION>/pneumonia
```

The `MODEL_VERSION` can be either `v0` or `v1`. Here are examples of URLs that can be used to make requests to the server using these model versions.

```shell

# remote
https://pc-djhy.onrender.com/api/v0/pneumonia - mlp-model
https://pc-djhy.onrender.com/api/v1/pneumonia - lenet-model

# locally
http://localhost:3001/api/v0/pneumonia - mlp-model
http://localhost:3001/api/v1/pneumonia - lenet-model
```

> Note that all requests should be sent to the server using the `POST` method.

### Expected Response

A request to `http://localhost:3001/api/v0/pneumonia` or `https://pc-djhy.onrender.com/api/v0/pneumonia` with an `image` file of the right format will yield the following `json` response to the client.

```json
{
  "meta": {
    "description": "given a medical chest-x-ray image of a human being we are going to classify weather a person have pneumonia virus, pneumonia bacteria or none of those(normal).",
    "language": "python",
    "library": "pytorch",
    "main": "computer vision (cv)",
    "programmer": "@crispengari"
  },
  "modelVersion": "v0",
  "predictions": {
    "all_predictions": [
      { "class_label": "NORMAL", "label": 0, "probability": 1.0 },
      { "class_label": "PNEUMONIA BACTERIA", "label": 1, "probability": 0.0 },
      { "class_label": "PNEUMONIA VIRAL", "label": 2, "probability": 0.0 }
    ],
    "top_prediction": {
      "class_label": "NORMAL",
      "label": 0,
      "probability": 1.0
    }
  },
  "success": true
}
```
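
The same request can also be made from Python with the `requests` library; the `image` field name matches the `cURL` example below:

```py
import requests

# use http://127.0.0.1:3001/api/v0/pneumonia for a locally running server
url = "https://pc-djhy.onrender.com/api/v0/pneumonia"

with open("normal.jpeg", "rb") as f:
    res = requests.post(url, files={"image": f})

data = res.json()
print(data["predictions"]["top_prediction"])  # e.g. {'class_label': 'NORMAL', 'label': 0, 'probability': 1.0}
```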

### Using `cURL`

Make sure that you have the image named `normal.jpeg` in the folder you are running your command from; otherwise you have to provide an absolute or relative path to the image.

> To make a `curl` `POST` request at `http://localhost:3001/api/v0/pneumonia` or at `https://pc-djhy.onrender.com/api/v0/pneumonia` with the file `normal.jpeg` we run the following command.

```shell
# remote
curl -X POST -F image=@normal.jpeg https://pc-djhy.onrender.com/api/v0/pneumonia
# locally
curl -X POST -F image=@normal.jpeg http://127.0.0.1:3001/api/v0/pneumonia
```

### Using Postman client

To make this request with postman we do it as follows:

1. Change the request method to `POST`
2. Click on `form-data`
3. Select type to be `file` on the `KEY` attribute
4. For the `KEY` type `image` and select the image you want to predict under `value`
5. Click send

If everything went well you will get the following response depending on the image you have selected:

```json
{
  "meta": {
    "description": "given a medical chest-x-ray image of a human being we are going to classify weather a person have pneumonia virus, pneumonia bacteria or none of those(normal).",
    "language": "python",
    "library": "pytorch",
    "main": "computer vision (cv)",
    "programmer": "@crispengari"
  },
  "modelVersion": "v0",
  "predictions": {
    "all_predictions": [
      { "class_label": "NORMAL", "label": 0, "probability": 1.0 },
      { "class_label": "PNEUMONIA BACTERIA", "label": 1, "probability": 0.0 },
      { "class_label": "PNEUMONIA VIRAL", "label": 2, "probability": 0.0 }
    ],
    "top_prediction": {
      "class_label": "NORMAL",
      "label": 0,
      "probability": 1.0
    }
  },
  "success": true
}
```

### Using the JavaScript `fetch` API

1. First you need to get the input from `html`
2. Create a `formData` object
3. Make a `POST` request

```js
const online = false;
const url = online
  ? `https://pc-djhy.onrender.com/api/v0/pneumonia`
  : "http://127.0.0.1:3001/api/v0/pneumonia";

const input = document.getElementById("input").files[0];
let formData = new FormData();
formData.append("image", input);
fetch(url, {
  method: "POST",
  body: formData,
})
  .then((res) => res.json())
  .then((data) => console.log(data));
```

If everything went well you will get the expected response.

```json
{
  "meta": {
    "description": "given a medical chest-x-ray image of a human being we are going to classify weather a person have pneumonia virus, pneumonia bacteria or none of those(normal).",
    "language": "python",
    "library": "pytorch",
    "main": "computer vision (cv)",
    "programmer": "@crispengari"
  },
  "modelVersion": "v0",
  "predictions": {
    "all_predictions": [
      { "class_label": "NORMAL", "label": 0, "probability": 1.0 },
      { "class_label": "PNEUMONIA BACTERIA", "label": 1, "probability": 0.0 },
      { "class_label": "PNEUMONIA VIRAL", "label": 2, "probability": 0.0 }
    ],
    "top_prediction": {
      "class_label": "NORMAL",
      "label": 0,
      "probability": 1.0
    }
  },
  "success": true
}
```

### Notebooks

The `ipynb` notebooks that I used for training the models and saving the `.pt` files can be found at the respective links below:

1. [Model Training And Saving - `MLP`](https://github.com/CrispenGari/cv-torch/blob/main/01-PNEUMONIA-CLASSIFICATION/01_MLP_Pneumonia.ipynb)
2. [Model Training And Saving - `LeNet`](https://github.com/CrispenGari/cv-torch/blob/main/01-PNEUMONIA-CLASSIFICATION/02_LeNet_Pneumonia.ipynb)