To propose and implement an intelligent real-time heartbeat classification algorithm and supplement the results with Explainable AI
Cardiovascular disease is a leading contributor to the global death rate. The heartbeat is a basic physiological function of the human body, and it is central to the investigation of heart function. One non-invasive method of assessing heart function is the ECG. The dataset provided for this challenge contains 17 classes of ECG recordings. Several attempts have been made to classify this data using different approaches (sources are available online); one article is provided in the references section to help you understand the problem better.
I devised an innovative algorithm for classifying ECG signals into 17 classes.
- Firstly, the algorithm enhances the provided dataset using a roll-over technique, so that each class is populated with new cases and a balanced dataset is formed.
- Secondly, a dual-path deep architecture is devised after analysing the various parameters provided.
- For the classes with fewer cases in the provided dataset, each class's samples are rolled into multiple new samples using `numpy.roll()`. I applied this roll-over in clockwise and anticlockwise fashion on alternate samples, to make the provided dataset rich.
- To keep the rolls balanced within each class, I rolled the samples in multiples of two, alternating between the clockwise and anticlockwise directions.
- Hence the transformed dataset is balanced without losing its significance (see the roll-over sketch below).
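Below is a minimal sketch of this roll-over augmentation, assuming each class is stored as a 2-D NumPy array of shape `(n_samples, signal_length)`; the shift size `base_shift` and the shift schedule are illustrative assumptions, not values taken from the original experiments.

```python
import numpy as np

def rollover_augment(signals: np.ndarray, target_count: int, base_shift: int = 25) -> np.ndarray:
    """Grow an under-represented class to `target_count` samples by circular
    shifts, alternating clockwise (+) and anticlockwise (-) rolls so the
    augmented set stays balanced in both directions."""
    out = list(signals)
    i = 0
    while len(out) < target_count:
        src = signals[i % len(signals)]      # cycle over the original samples
        sign = 1 if i % 2 == 0 else -1       # alternate the roll direction
        shift = base_shift * (i // 2 + 1)    # hypothetical shift schedule
        out.append(np.roll(src, sign * shift))
        i += 1
    return np.stack(out)
```

The per-class sample counts after this augmentation are: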
Class Index | Class Label | Sample Count |
---|---|---|
0 | 6 WPW | 273 |
1 | 5 SVTA | 273 |
2 | 2 APB | 264 |
3 | 15 RBBBB | 248 |
4 | 11 IVR | 280 |
5 | 4 AFIB | 270 |
6 | 7 PVC | 270 |
7 | 1 NSR | 283 |
8 | 13 Fusion | 275 |
9 | 9 Trigemy | 273 |
10 | 3 AFL | 280 |
11 | 12 VFL | 280 |
12 | 14 LBBBB | 206 |
13 | 16 SDHB | 280 |
14 | 8 Bigeminy | 275 |
15 | 17 PR | 270 |
16 | 10 VT | 280 |
- As mentioned above, the data was transformed using the roll-over technique. To account for this, the model was built with two paths: one path takes the data as given, and the other takes the reverse of the same data. This counters the clockwise and anticlockwise roll-overs applied during augmentation and yields accurate predictions (see the forward-pass sketch after the model summary below).
- The model contains the following sub-models:
ConvBlock (x3)

```
(
  (conv1d_1): Conv1d(1, 4, kernel_size=(5,), stride=(2,))
  (batch_norm_1d_1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU()
  (dropout_): Dropout(p=0.005, inplace=False)
  (conv1d_2): Conv1d(4, 16, kernel_size=(4,), stride=(2,))
  (batch_norm_1d_2): BatchNorm1d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU()
  (dropout_): Dropout(p=0.005, inplace=False)
  (conv1d_3): Conv1d(16, 32, kernel_size=(4,), stride=(2,))
  (batch_norm_1d_3): BatchNorm1d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU()
  (dropout_): Dropout(p=0.005, inplace=False)
  (conv1d_4): Conv1d(32, 32, kernel_size=(4,), stride=(2,))
  (batch_norm_1d_4): BatchNorm1d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU()
  (dropout_): Dropout(p=0.005, inplace=False)
)
```
LinearBlock (x3)

```
(
  (linear_1d_1): Linear(in_features=7136, out_features=2048, bias=True)
  (relu): ReLU()
  (dropout_): Dropout(p=0.005, inplace=False)
  (linear_1d_2): Linear(in_features=2048, out_features=1024, bias=True)
  (relu): ReLU()
  (dropout_): Dropout(p=0.005, inplace=False)
  (linear_1d_3): Linear(in_features=1024, out_features=512, bias=True)
  (relu): ReLU()
  (dropout_): Dropout(p=0.005, inplace=False)
  (linear_1d_4): Linear(in_features=512, out_features=256, bias=True)
  (relu): ReLU()
  (dropout_): Dropout(p=0.005, inplace=False)
)
```
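For reference, these two sub-models can be reconstructed as plain `nn.Module` stacks. The sketch below is my reading of the printed layer order (conv → batch norm → ReLU → dropout per stage); the `_conv_stage` helper and class layout are for readability and are not the original source code.

```python
import torch.nn as nn

def _conv_stage(c_in: int, c_out: int, k: int) -> nn.Sequential:
    # One conv stage as printed: Conv1d -> BatchNorm1d -> ReLU -> Dropout
    return nn.Sequential(
        nn.Conv1d(c_in, c_out, kernel_size=k, stride=2),
        nn.BatchNorm1d(c_out),
        nn.ReLU(),
        nn.Dropout(p=0.005),
    )

class ConvBlock(nn.Module):
    def __init__(self):
        super().__init__()
        self.stages = nn.Sequential(
            _conv_stage(1, 4, 5),
            _conv_stage(4, 16, 4),
            _conv_stage(16, 32, 4),
            _conv_stage(32, 32, 4),
        )

    def forward(self, x):            # x: (batch, 1, signal_length)
        return self.stages(x)

class LinearBlock(nn.Module):
    def __init__(self):
        super().__init__()
        dims = [7136, 2048, 1024, 512, 256]
        self.stages = nn.Sequential(*(
            nn.Sequential(nn.Linear(i, o), nn.ReLU(), nn.Dropout(p=0.005))
            for i, o in zip(dims, dims[1:])
        ))

    def forward(self, x):            # x: (batch, 7136) flattened conv features
        return self.stages(x)
```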
The model for the same can be accessed from: Deep Learning Attentive Model.
CnnBiLSTM1D

```
(
  (tanh): Tanh()
  (relu): ReLU()
  (dropout_): Dropout(p=0.005, inplace=False)
  (softmax): Softmax(dim=1)
  (block_1): InitConv(
    (relu): ReLU()
    (dropout_): Dropout(p=0.005, inplace=False)
    (flatten): Flatten(start_dim=1, end_dim=-1)
    (tanh): Tanh()
    (softmax): Softmax(dim=1)
    (conv1d_1): Conv1d(1, 4, kernel_size=(5,), stride=(2,))
    (batch_norm_1d_1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv1d_2): Conv1d(4, 16, kernel_size=(4,), stride=(2,))
    (batch_norm_1d_2): BatchNorm1d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv1d_3): Conv1d(16, 32, kernel_size=(4,), stride=(2,))
    (batch_norm_1d_3): BatchNorm1d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv1d_4): Conv1d(32, 32, kernel_size=(4,), stride=(2,))
    (batch_norm_1d_4): BatchNorm1d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (linear_1d_1): Linear(in_features=7136, out_features=2048, bias=True)
    (linear_1d_2): Linear(in_features=2048, out_features=1024, bias=True)
    (linear_1d_3): Linear(in_features=1024, out_features=512, bias=True)
    (linear_1d_4): Linear(in_features=512, out_features=256, bias=True)
    (attention_linear_1d_1): Linear(in_features=256, out_features=256, bias=True)
  )
  (block_2): InitConv(
    (relu): ReLU()
    (dropout_): Dropout(p=0.005, inplace=False)
    (flatten): Flatten(start_dim=1, end_dim=-1)
    (tanh): Tanh()
    (softmax): Softmax(dim=1)
    (conv1d_1): Conv1d(1, 4, kernel_size=(5,), stride=(2,))
    (batch_norm_1d_1): BatchNorm1d(4, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv1d_2): Conv1d(4, 16, kernel_size=(4,), stride=(2,))
    (batch_norm_1d_2): BatchNorm1d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv1d_3): Conv1d(16, 32, kernel_size=(4,), stride=(2,))
    (batch_norm_1d_3): BatchNorm1d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (conv1d_4): Conv1d(32, 32, kernel_size=(4,), stride=(2,))
    (batch_norm_1d_4): BatchNorm1d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (linear_1d_1): Linear(in_features=7136, out_features=2048, bias=True)
    (linear_1d_2): Linear(in_features=2048, out_features=1024, bias=True)
    (linear_1d_3): Linear(in_features=1024, out_features=512, bias=True)
    (linear_1d_4): Linear(in_features=512, out_features=256, bias=True)
    (attention_linear_1d_1): Linear(in_features=256, out_features=256, bias=True)
  )
  (bilinear): Bilinear(in1_features=256, in2_features=256, out_features=256, bias=True)
  (linear_1): Linear(in_features=1024, out_features=512, bias=True)
  (linear_2): Linear(in_features=512, out_features=256, bias=True)
  (linear_3): Linear(in_features=256, out_features=128, bias=True)
  (linear_4): Linear(in_features=128, out_features=64, bias=True)
  (out): Linear(in_features=64, out_features=17, bias=True)
  (conv_1d_1): Conv1d(1, 8, kernel_size=(4,), stride=(2,))
  (batch_norm_1d_1): BatchNorm1d(8, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (bi_lstm_1): LSTM(256, 512, num_layers=2, dropout=0.005, bidirectional=True)
  (bi_lstm_2): LSTM(1024, 1024, num_layers=2, dropout=0.002, bidirectional=True)
)
```
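To make the dual-path design concrete, here is a simplified forward-pass sketch: `block_1` receives the beat as given, `block_2` receives its time-reversed copy, and the `Bilinear` layer fuses the two 256-dimensional embeddings. This illustrates the idea only; the BiLSTM stack, attention linears, and the 17-way head of `CnnBiLSTM1D` are omitted, and the exact reversal mechanics are my assumption from the description above.

```python
import torch
import torch.nn as nn

class DualPathSketch(nn.Module):
    def __init__(self, block_1: nn.Module, block_2: nn.Module):
        super().__init__()
        self.block_1 = block_1                    # maps (B, 1, L) -> (B, 256)
        self.block_2 = block_2                    # maps (B, 1, L) -> (B, 256)
        self.bilinear = nn.Bilinear(256, 256, 256)

    def forward(self, x):                         # x: (batch, 1, signal_length)
        fwd = self.block_1(x)                     # path 1: signal as given
        rev = self.block_2(torch.flip(x, dims=[-1]))  # path 2: time-reversed
        return self.bilinear(fwd, rev)            # fuse both 256-d embeddings
```

The test-set performance of the full model follows.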
```
Test Loss {Cross Entropy Loss} |-> 0.102077
Test Accuracy of {0}[6 WPW] |-> 100.000000% (52/52)
Test Accuracy of {1}[5 SVTA] |-> 100.000000% (51/51)
Test Accuracy of {2}[2 APB] |-> 92.307692% (36/39)
Test Accuracy of {3}[15 RBBBB] |-> 94.642857% (53/56)
Test Accuracy of {4}[11 IVR] |-> 100.000000% (62/62)
Test Accuracy of {5}[4 AFIB] |-> 96.296296% (52/54)
Test Accuracy of {6}[7 PVC] |-> 91.836735% (45/49)
Test Accuracy of {7}[1 NSR] |-> 98.360656% (60/61)
Test Accuracy of {8}[13 Fusion] |-> 100.000000% (61/61)
Test Accuracy of {9}[9 Trigemy] |-> 98.000000% (49/50)
Test Accuracy of {10}[3 AFL] |-> 98.305085% (58/59)
Test Accuracy of {11}[12 VFL] |-> 100.000000% (59/59)
Test Accuracy of {12}[14 LBBBB] |-> 100.000000% (44/44)
Test Accuracy of {13}[16 SDHB] |-> 100.000000% (47/47)
Test Accuracy of {14}[8 Bigeminy] |-> 98.039216% (50/51)
Test Accuracy of {15}[17 PR] |-> 98.214286% (55/56)
Test Accuracy of {16}[10 VT] |-> 98.437500% (63/64)
Test Accuracy {Overall} |-> 98.032787% (897/915)
```
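The per-class report above can be reproduced from predicted and true label arrays with a small helper like the following (a sketch; the original evaluation loop may differ):

```python
import numpy as np

def per_class_report(y_true: np.ndarray, y_pred: np.ndarray, names: list) -> None:
    """Print per-class and overall accuracy in the format shown above."""
    for c, name in enumerate(names):
        mask = y_true == c
        correct, total = int((y_pred[mask] == c).sum()), int(mask.sum())
        print(f"Test Accuracy of {{{c}}}[{name}] |-> "
              f"{100.0 * correct / total:.6f}% ({correct}/{total})")
    overall = int((y_pred == y_true).sum())
    print(f"Test Accuracy {{Overall}} |-> "
          f"{100.0 * overall / len(y_true):.6f}% ({overall}/{len(y_true)})")
```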