Goal:
Apply the logistic regression model to the breast cancer dataset
Load Data:
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.metrics import classification_report, confusion_matrix, ConfusionMatrixDisplay
data_dict = load_breast_cancer()
X = pd.DataFrame(data_dict['data'], columns=data_dict['feature_names'])
y = pd.DataFrame(data_dict['target'], columns=['target'])
EDA (1):
It is important to set aside a test set at this stage to avoid data leakage, but the data must first be checked for class imbalance to decide whether stratified sampling is needed; hence EDA (1).
y.value_counts(normalize=True)
target
1 0.627417
0 0.372583
dtype: float64
There is a notable imbalance (roughly 63% benign to 37% malignant), so the test set is put aside using stratified samples.
# create train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.20, random_state=28, stratify=y)
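As a quick sanity check (not part of the original run), the class proportions can be compared after the split; stratify=y should preserve the roughly 63/37 ratio in both sets.

# confirm the stratified split preserved the class balance
display(y_train.value_counts(normalize=True))
display(y_test.value_counts(normalize=True))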
EDA (2):
pd.options.display.max_columns=100
display(X_train.head())
display(y_train.head())
 | mean radius | mean texture | mean perimeter | mean area | mean smoothness | mean compactness | mean concavity | mean concave points | mean symmetry | mean fractal dimension | radius error | texture error | perimeter error | area error | smoothness error | compactness error | concavity error | concave points error | symmetry error | fractal dimension error | worst radius | worst texture | worst perimeter | worst area | worst smoothness | worst compactness | worst concavity | worst concave points | worst symmetry | worst fractal dimension |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
267 | 13.59 | 21.84 | 87.16 | 561.0 | 0.07956 | 0.08259 | 0.040720 | 0.021420 | 0.1635 | 0.05859 | 0.3380 | 1.9160 | 2.591 | 26.76 | 0.005436 | 0.02406 | 0.030990 | 0.009919 | 0.02030 | 0.003009 | 14.80 | 30.04 | 97.66 | 661.5 | 0.1005 | 0.17300 | 0.14530 | 0.06189 | 0.2446 | 0.07024 |
154 | 13.15 | 15.34 | 85.31 | 538.9 | 0.09384 | 0.08498 | 0.092930 | 0.034830 | 0.1822 | 0.06207 | 0.2710 | 0.7927 | 1.819 | 22.79 | 0.008584 | 0.02017 | 0.030470 | 0.009536 | 0.02769 | 0.003479 | 14.77 | 20.50 | 97.67 | 677.3 | 0.1478 | 0.22560 | 0.30090 | 0.09722 | 0.3849 | 0.08633 |
310 | 11.70 | 19.11 | 74.33 | 418.7 | 0.08814 | 0.05253 | 0.015830 | 0.011480 | 0.1936 | 0.06128 | 0.1601 | 1.4300 | 1.109 | 11.28 | 0.006064 | 0.00911 | 0.010420 | 0.007638 | 0.02349 | 0.001661 | 12.61 | 26.55 | 80.92 | 483.1 | 0.1223 | 0.10870 | 0.07915 | 0.05741 | 0.3487 | 0.06958 |
122 | 24.25 | 20.20 | 166.20 | 1761.0 | 0.14470 | 0.28670 | 0.426800 | 0.201200 | 0.2655 | 0.06877 | 1.5090 | 3.1200 | 9.807 | 233.00 | 0.023330 | 0.09806 | 0.127800 | 0.018220 | 0.04547 | 0.009875 | 26.02 | 23.99 | 180.90 | 2073.0 | 0.1696 | 0.42440 | 0.58030 | 0.22480 | 0.3222 | 0.08009 |
332 | 11.22 | 19.86 | 71.94 | 387.3 | 0.10540 | 0.06779 | 0.005006 | 0.007583 | 0.1940 | 0.06028 | 0.2976 | 1.9660 | 1.959 | 19.62 | 0.012890 | 0.01104 | 0.003297 | 0.004967 | 0.04243 | 0.001963 | 11.98 | 25.78 | 76.91 | 436.1 | 0.1424 | 0.09669 | 0.01335 | 0.02022 | 0.3292 | 0.06522 |
 | target |
---|---|
267 | 1 |
154 | 1 |
310 | 1 |
122 | 0 |
332 | 1 |
X_train.describe()
 | mean radius | mean texture | mean perimeter | mean area | mean smoothness | mean compactness | mean concavity | mean concave points | mean symmetry | mean fractal dimension | radius error | texture error | perimeter error | area error | smoothness error | compactness error | concavity error | concave points error | symmetry error | fractal dimension error | worst radius | worst texture | worst perimeter | worst area | worst smoothness | worst compactness | worst concavity | worst concave points | worst symmetry | worst fractal dimension |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
count | 455.000000 | 455.000000 | 455.000000 | 455.000000 | 455.000000 | 455.000000 | 455.000000 | 455.000000 | 455.000000 | 455.000000 | 455.000000 | 455.000000 | 455.000000 | 455.000000 | 455.000000 | 455.000000 | 455.000000 | 455.000000 | 455.000000 | 455.000000 | 455.000000 | 455.000000 | 455.000000 | 455.000000 | 455.000000 | 455.000000 | 455.000000 | 455.000000 | 455.000000 | 455.000000 |
mean | 14.208224 | 19.309363 | 92.523033 | 664.190110 | 0.096184 | 0.104225 | 0.089821 | 0.049322 | 0.180867 | 0.062600 | 0.409321 | 1.207593 | 2.890140 | 41.518136 | 0.007037 | 0.025404 | 0.032184 | 0.011794 | 0.020385 | 0.003769 | 16.370152 | 25.643275 | 107.971011 | 896.525934 | 0.131921 | 0.253225 | 0.274383 | 0.115305 | 0.288969 | 0.083461 |
std | 3.599539 | 4.367548 | 24.844850 | 364.571393 | 0.014049 | 0.052249 | 0.081708 | 0.039482 | 0.026790 | 0.006829 | 0.289713 | 0.552007 | 2.097842 | 48.866145 | 0.003136 | 0.017505 | 0.031551 | 0.006117 | 0.008182 | 0.002678 | 4.961352 | 6.217853 | 34.541306 | 594.625227 | 0.022897 | 0.154289 | 0.210332 | 0.065869 | 0.058213 | 0.017552 |
min | 6.981000 | 9.710000 | 43.790000 | 143.500000 | 0.052630 | 0.019380 | 0.000000 | 0.000000 | 0.106000 | 0.049960 | 0.111500 | 0.360200 | 0.757000 | 6.802000 | 0.001713 | 0.002252 | 0.000000 | 0.000000 | 0.007882 | 0.000895 | 7.930000 | 12.020000 | 50.410000 | 185.200000 | 0.071170 | 0.027290 | 0.000000 | 0.000000 | 0.156500 | 0.055040 |
25% | 11.675000 | 16.165000 | 74.795000 | 417.950000 | 0.085855 | 0.066160 | 0.030410 | 0.020455 | 0.162100 | 0.057490 | 0.234100 | 0.823700 | 1.609000 | 17.860000 | 0.005036 | 0.013535 | 0.015435 | 0.007909 | 0.015015 | 0.002178 | 12.980000 | 21.005000 | 83.945000 | 512.800000 | 0.116600 | 0.147950 | 0.120350 | 0.065225 | 0.251350 | 0.070920 |
50% | 13.450000 | 18.870000 | 86.870000 | 557.200000 | 0.096390 | 0.093620 | 0.061550 | 0.033500 | 0.178800 | 0.061540 | 0.324900 | 1.095000 | 2.304000 | 24.720000 | 0.006272 | 0.020030 | 0.025860 | 0.010900 | 0.018700 | 0.003114 | 14.990000 | 25.270000 | 97.960000 | 694.400000 | 0.131300 | 0.211300 | 0.226000 | 0.100100 | 0.282900 | 0.080090 |
75% | 15.935000 | 21.825000 | 104.700000 | 790.850000 | 0.105700 | 0.129400 | 0.121500 | 0.073820 | 0.195650 | 0.066120 | 0.477850 | 1.469000 | 3.376500 | 45.390000 | 0.008156 | 0.032295 | 0.042565 | 0.014710 | 0.023145 | 0.004572 | 18.775000 | 29.880000 | 125.250000 | 1077.000000 | 0.145800 | 0.328050 | 0.381900 | 0.162650 | 0.317150 | 0.091825 |
max | 28.110000 | 39.280000 | 188.500000 | 2501.000000 | 0.144700 | 0.345400 | 0.426800 | 0.201200 | 0.290600 | 0.095750 | 2.873000 | 4.885000 | 21.980000 | 542.200000 | 0.031130 | 0.106400 | 0.396000 | 0.052790 | 0.078950 | 0.029840 | 36.040000 | 49.540000 | 251.200000 | 4254.000000 | 0.222600 | 1.058000 | 1.252000 | 0.291000 | 0.577400 | 0.207500 |
The features are on very different scales, so scaling is necessary prior to fitting the model.
# a quick boxplot to check for egregious outliers
sns.boxplot(data=X_train, orient='h')
plt.show()
While the mean area and worst area columns have large values and what appear to be many outliers, this is expected because the columns are correlated: larger worst area values go hand in hand with larger mean area values. No values need to be removed, but again, scaling is necessary.
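As a reminder of what the scaling step will do later in the pipeline, StandardScaler standardizes each column to z = (x - mean) / std. A minimal illustration, not part of the original run; the real scaling happens inside the pipeline so each cross-validation fold is scaled independently:

# illustrative only: standardize the training features
scaled = StandardScaler().fit_transform(X_train)
X_train_scaled = pd.DataFrame(scaled, columns=X_train.columns)
# after scaling, every column has mean ~0 and standard deviation ~1
display(X_train_scaled.describe().loc[['mean', 'std']].round(2))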
plotdata = pd.concat([X_train, y_train], axis='columns')
plotdata.head()
 | mean radius | mean texture | mean perimeter | mean area | mean smoothness | mean compactness | mean concavity | mean concave points | mean symmetry | mean fractal dimension | radius error | texture error | perimeter error | area error | smoothness error | compactness error | concavity error | concave points error | symmetry error | fractal dimension error | worst radius | worst texture | worst perimeter | worst area | worst smoothness | worst compactness | worst concavity | worst concave points | worst symmetry | worst fractal dimension | target |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
267 | 13.59 | 21.84 | 87.16 | 561.0 | 0.07956 | 0.08259 | 0.040720 | 0.021420 | 0.1635 | 0.05859 | 0.3380 | 1.9160 | 2.591 | 26.76 | 0.005436 | 0.02406 | 0.030990 | 0.009919 | 0.02030 | 0.003009 | 14.80 | 30.04 | 97.66 | 661.5 | 0.1005 | 0.17300 | 0.14530 | 0.06189 | 0.2446 | 0.07024 | 1 |
154 | 13.15 | 15.34 | 85.31 | 538.9 | 0.09384 | 0.08498 | 0.092930 | 0.034830 | 0.1822 | 0.06207 | 0.2710 | 0.7927 | 1.819 | 22.79 | 0.008584 | 0.02017 | 0.030470 | 0.009536 | 0.02769 | 0.003479 | 14.77 | 20.50 | 97.67 | 677.3 | 0.1478 | 0.22560 | 0.30090 | 0.09722 | 0.3849 | 0.08633 | 1 |
310 | 11.70 | 19.11 | 74.33 | 418.7 | 0.08814 | 0.05253 | 0.015830 | 0.011480 | 0.1936 | 0.06128 | 0.1601 | 1.4300 | 1.109 | 11.28 | 0.006064 | 0.00911 | 0.010420 | 0.007638 | 0.02349 | 0.001661 | 12.61 | 26.55 | 80.92 | 483.1 | 0.1223 | 0.10870 | 0.07915 | 0.05741 | 0.3487 | 0.06958 | 1 |
122 | 24.25 | 20.20 | 166.20 | 1761.0 | 0.14470 | 0.28670 | 0.426800 | 0.201200 | 0.2655 | 0.06877 | 1.5090 | 3.1200 | 9.807 | 233.00 | 0.023330 | 0.09806 | 0.127800 | 0.018220 | 0.04547 | 0.009875 | 26.02 | 23.99 | 180.90 | 2073.0 | 0.1696 | 0.42440 | 0.58030 | 0.22480 | 0.3222 | 0.08009 | 0 |
332 | 11.22 | 19.86 | 71.94 | 387.3 | 0.10540 | 0.06779 | 0.005006 | 0.007583 | 0.1940 | 0.06028 | 0.2976 | 1.9660 | 1.959 | 19.62 | 0.012890 | 0.01104 | 0.003297 | 0.004967 | 0.04243 | 0.001963 | 11.98 | 25.78 | 76.91 | 436.1 | 0.1424 | 0.09669 | 0.01335 | 0.02022 | 0.3292 | 0.06522 | 1 |
# kde pairplot of the last nine ('worst') feature columns, colored by target
sns.pairplot(data=plotdata.iloc[:, 21:], hue='target', kind='kde')
plt.show()
Modeling:
# set up the model steps
# 1-scaling, 2-logistic regression modeling
steps = [('scaler',StandardScaler()),('lr',LogisticRegression())]
# create the pipeline using steps and custom names for the steps
lr_pipeline = Pipeline(steps)
# set up the parameters of the model to search over
parameters = {'lr__penalty':['l1', 'l2'],
'lr__C':[.001, .01, .1, 1],
'lr__class_weight':['balanced', None],
'lr__solver':['liblinear']}
# grid search over the pipeline; cv=5 uses stratified folds for classifiers
lr_model = GridSearchCV(lr_pipeline, parameters, cv=5)
lr_model
GridSearchCV(cv=5,
estimator=Pipeline(steps=[('scaler', StandardScaler()),
('lr', LogisticRegression())]),
param_grid={'lr__C': [0.001, 0.01, 0.1, 1],
'lr__class_weight': ['balanced', None],
'lr__penalty': ['l1', 'l2'],
'lr__solver': ['liblinear']})
# fit the model
lr_model.fit(X_train.values, y_train.values.ravel())
GridSearchCV(cv=5,
estimator=Pipeline(steps=[('scaler', StandardScaler()),
('lr', LogisticRegression())]),
param_grid={'lr__C': [0.001, 0.01, 0.1, 1],
'lr__class_weight': ['balanced', None],
'lr__penalty': ['l1', 'l2'],
'lr__solver': ['liblinear']})
# the best model
lr_model.best_estimator_
Pipeline(steps=[('scaler', StandardScaler()),
('lr', LogisticRegression(C=0.1, solver='liblinear'))])
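The same information is available as a plain dictionary via best_params_, and the fitted coefficients can be pulled from the refit pipeline (a quick inspection, not in the original run):

# winning hyperparameter combination
print(lr_model.best_params_)
# coefficients of the refit logistic regression step ('lr' in the pipeline)
best_lr = lr_model.best_estimator_.named_steps['lr']
display(pd.Series(best_lr.coef_.ravel(), index=X_train.columns).sort_values())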
The best model uses C=0.1 with the liblinear solver and the default values for penalty ('l2') and class weight (None), selected via stratified 5-fold cross-validation. Note in the rank_test_score column below that three parameter combinations tied for the top mean test score; GridSearchCV refits the first of them. The following shows the full cross-validation results DataFrame from the grid search.
display(pd.DataFrame(lr_model.cv_results_))
 | mean_fit_time | std_fit_time | mean_score_time | std_score_time | param_lr__C | param_lr__class_weight | param_lr__penalty | param_lr__solver | params | split0_test_score | split1_test_score | split2_test_score | split3_test_score | split4_test_score | mean_test_score | std_test_score | rank_test_score |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 0.002227 | 0.000132 | 0.000673 | 0.000132 | 0.001 | balanced | l1 | liblinear | {'lr__C': 0.001, 'lr__class_weight': 'balanced... | 0.373626 | 0.373626 | 0.373626 | 0.373626 | 0.373626 | 0.373626 | 0.000000 | 15 |
1 | 0.002761 | 0.000169 | 0.000791 | 0.000248 | 0.001 | balanced | l2 | liblinear | {'lr__C': 0.001, 'lr__class_weight': 'balanced... | 0.934066 | 0.967033 | 0.945055 | 0.945055 | 0.923077 | 0.942857 | 0.014579 | 11 |
2 | 0.001681 | 0.000323 | 0.000563 | 0.000225 | 0.001 | None | l1 | liblinear | {'lr__C': 0.001, 'lr__class_weight': None, 'lr... | 0.373626 | 0.373626 | 0.373626 | 0.373626 | 0.373626 | 0.373626 | 0.000000 | 15 |
3 | 0.001417 | 0.000123 | 0.000343 | 0.000024 | 0.001 | None | l2 | liblinear | {'lr__C': 0.001, 'lr__class_weight': None, 'lr... | 0.912088 | 0.967033 | 0.956044 | 0.912088 | 0.934066 | 0.936264 | 0.022413 | 12 |
4 | 0.002136 | 0.000140 | 0.000424 | 0.000087 | 0.01 | balanced | l1 | liblinear | {'lr__C': 0.01, 'lr__class_weight': 'balanced'... | 0.901099 | 0.923077 | 0.912088 | 0.945055 | 0.879121 | 0.912088 | 0.021978 | 14 |
5 | 0.005018 | 0.005448 | 0.000643 | 0.000206 | 0.01 | balanced | l2 | liblinear | {'lr__C': 0.01, 'lr__class_weight': 'balanced'... | 0.956044 | 0.978022 | 0.978022 | 0.945055 | 0.956044 | 0.962637 | 0.013187 | 9 |
6 | 0.002407 | 0.000563 | 0.000540 | 0.000208 | 0.01 | None | l1 | liblinear | {'lr__C': 0.01, 'lr__class_weight': None, 'lr_... | 0.901099 | 0.934066 | 0.912088 | 0.945055 | 0.879121 | 0.914286 | 0.023466 | 13 |
7 | 0.001878 | 0.000160 | 0.000406 | 0.000023 | 0.01 | None | l2 | liblinear | {'lr__C': 0.01, 'lr__class_weight': None, 'lr_... | 0.945055 | 0.989011 | 0.967033 | 0.945055 | 0.945055 | 0.958242 | 0.017582 | 10 |
8 | 0.001866 | 0.000141 | 0.000339 | 0.000020 | 0.1 | balanced | l1 | liblinear | {'lr__C': 0.1, 'lr__class_weight': 'balanced',... | 0.956044 | 0.978022 | 0.956044 | 0.989011 | 0.956044 | 0.967033 | 0.013900 | 8 |
9 | 0.001891 | 0.000133 | 0.000326 | 0.000012 | 0.1 | balanced | l2 | liblinear | {'lr__C': 0.1, 'lr__class_weight': 'balanced',... | 0.978022 | 0.989011 | 0.967033 | 0.978022 | 0.967033 | 0.975824 | 0.008223 | 4 |
10 | 0.001682 | 0.000101 | 0.000314 | 0.000010 | 0.1 | None | l1 | liblinear | {'lr__C': 0.1, 'lr__class_weight': None, 'lr__... | 0.978022 | 0.989011 | 0.967033 | 0.956044 | 0.967033 | 0.971429 | 0.011207 | 6 |
11 | 0.001662 | 0.000140 | 0.000300 | 0.000008 | 0.1 | None | l2 | liblinear | {'lr__C': 0.1, 'lr__class_weight': None, 'lr__... | 0.989011 | 0.989011 | 0.967033 | 0.978022 | 0.967033 | 0.978022 | 0.009829 | 1 |
12 | 0.002521 | 0.000377 | 0.000385 | 0.000073 | 1 | balanced | l1 | liblinear | {'lr__C': 1, 'lr__class_weight': 'balanced', '... | 0.967033 | 1.000000 | 0.967033 | 0.989011 | 0.967033 | 0.978022 | 0.013900 | 1 |
13 | 0.002215 | 0.000045 | 0.000339 | 0.000026 | 1 | balanced | l2 | liblinear | {'lr__C': 1, 'lr__class_weight': 'balanced', '... | 0.978022 | 0.978022 | 0.956044 | 0.978022 | 0.967033 | 0.971429 | 0.008791 | 7 |
14 | 0.002766 | 0.000268 | 0.000480 | 0.000116 | 1 | None | l1 | liblinear | {'lr__C': 1, 'lr__class_weight': None, 'lr__pe... | 0.956044 | 1.000000 | 0.967033 | 0.989011 | 0.967033 | 0.975824 | 0.016150 | 4 |
15 | 0.002173 | 0.000046 | 0.000350 | 0.000020 | 1 | None | l2 | liblinear | {'lr__C': 1, 'lr__class_weight': None, 'lr__pe... | 0.978022 | 1.000000 | 0.978022 | 0.967033 | 0.967033 | 0.978022 | 0.012038 | 1 |
# get the predictions on the held-out test set
# (.values for consistency with how the model was fit)
y_pred = lr_model.predict(X_test.values)
# best mean cross-validation accuracy (not the test-set accuracy)
str(round(lr_model.best_score_,2)*100)+'% Accuracy'
'98.0% Accuracy'
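Note that best_score_ is the mean cross-validation accuracy on the training folds, not the test-set accuracy. The held-out accuracy can be checked directly (an extra check, not in the original run):

# accuracy on the held-out test set
print(f"{lr_model.score(X_test.values, y_test.values.ravel()):.1%} test accuracy")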
The confusion matrix is below:
# manual confusion matrix calculation
cm = confusion_matrix(y_test, y_pred)
cm
array([[41, 1],
[ 0, 72]])
# better visual for the confusion matrix
# (plot_confusion_matrix was removed in scikit-learn 1.2;
# ConfusionMatrixDisplay.from_estimator is its replacement)
ConfusionMatrixDisplay.from_estimator(lr_model, X_test.values, y_test)
plt.show()
The classification report is below:
classification_report(y_test, y_pred).split('\n')
[' precision recall f1-score support',
'',
' 0 1.00 0.98 0.99 42',
' 1 0.99 1.00 0.99 72',
'',
' accuracy 0.99 114',
' macro avg 0.99 0.99 0.99 114',
'weighted avg 0.99 0.99 0.99 114',
'']
The interpretation of the results is more important than model accuracy. Which label corresponds to malignant and which to benign? Is this level of accuracy acceptable given that labeling?
data_dict.keys()
dict_keys(['data', 'target', 'frame', 'target_names', 'DESCR', 'feature_names', 'filename'])
data_dict['DESCR'].split('\n')
['.. _breast_cancer_dataset:',
'',
'Breast cancer wisconsin (diagnostic) dataset',
'--------------------------------------------',
'',
'**Data Set Characteristics:**',
'',
' :Number of Instances: 569',
'',
' :Number of Attributes: 30 numeric, predictive attributes and the class',
'',
' :Attribute Information:',
' - radius (mean of distances from center to points on the perimeter)',
' - texture (standard deviation of gray-scale values)',
' - perimeter',
' - area',
' - smoothness (local variation in radius lengths)',
' - compactness (perimeter^2 / area - 1.0)',
' - concavity (severity of concave portions of the contour)',
' - concave points (number of concave portions of the contour)',
' - symmetry',
' - fractal dimension ("coastline approximation" - 1)',
'',
' The mean, standard error, and "worst" or largest (mean of the three',
' worst/largest values) of these features were computed for each image,',
' resulting in 30 features. For instance, field 0 is Mean Radius, field',
' 10 is Radius SE, field 20 is Worst Radius.',
'',
' - class:',
' - WDBC-Malignant',
' - WDBC-Benign',
'',
' :Summary Statistics:',
'',
' ===================================== ====== ======',
' Min Max',
' ===================================== ====== ======',
' radius (mean): 6.981 28.11',
' texture (mean): 9.71 39.28',
' perimeter (mean): 43.79 188.5',
' area (mean): 143.5 2501.0',
' smoothness (mean): 0.053 0.163',
' compactness (mean): 0.019 0.345',
' concavity (mean): 0.0 0.427',
' concave points (mean): 0.0 0.201',
' symmetry (mean): 0.106 0.304',
' fractal dimension (mean): 0.05 0.097',
' radius (standard error): 0.112 2.873',
' texture (standard error): 0.36 4.885',
' perimeter (standard error): 0.757 21.98',
' area (standard error): 6.802 542.2',
' smoothness (standard error): 0.002 0.031',
' compactness (standard error): 0.002 0.135',
' concavity (standard error): 0.0 0.396',
' concave points (standard error): 0.0 0.053',
' symmetry (standard error): 0.008 0.079',
' fractal dimension (standard error): 0.001 0.03',
' radius (worst): 7.93 36.04',
' texture (worst): 12.02 49.54',
' perimeter (worst): 50.41 251.2',
' area (worst): 185.2 4254.0',
' smoothness (worst): 0.071 0.223',
' compactness (worst): 0.027 1.058',
' concavity (worst): 0.0 1.252',
' concave points (worst): 0.0 0.291',
' symmetry (worst): 0.156 0.664',
' fractal dimension (worst): 0.055 0.208',
' ===================================== ====== ======',
'',
' :Missing Attribute Values: None',
'',
' :Class Distribution: 212 - Malignant, 357 - Benign',
'',
' :Creator: Dr. William H. Wolberg, W. Nick Street, Olvi L. Mangasarian',
'',
' :Donor: Nick Street',
'',
' :Date: November, 1995',
'',
'This is a copy of UCI ML Breast Cancer Wisconsin (Diagnostic) datasets.',
'https://goo.gl/U2Uwz2',
'',
'Features are computed from a digitized image of a fine needle',
'aspirate (FNA) of a breast mass. They describe',
'characteristics of the cell nuclei present in the image.',
'',
'Separating plane described above was obtained using',
'Multisurface Method-Tree (MSM-T) [K. P. Bennett, "Decision Tree',
'Construction Via Linear Programming." Proceedings of the 4th',
'Midwest Artificial Intelligence and Cognitive Science Society,',
'pp. 97-101, 1992], a classification method which uses linear',
'programming to construct a decision tree. Relevant features',
'were selected using an exhaustive search in the space of 1-4',
'features and 1-3 separating planes.',
'',
'The actual linear program used to obtain the separating plane',
'in the 3-dimensional space is that described in:',
'[K. P. Bennett and O. L. Mangasarian: "Robust Linear',
'Programming Discrimination of Two Linearly Inseparable Sets",',
'Optimization Methods and Software 1, 1992, 23-34].',
'',
'This database is also available through the UW CS ftp server:',
'',
'ftp ftp.cs.wisc.edu',
'cd math-prog/cpo-dataset/machine-learn/WDBC/',
'',
'.. topic:: References',
'',
' - W.N. Street, W.H. Wolberg and O.L. Mangasarian. Nuclear feature extraction ',
' for breast tumor diagnosis. IS&T/SPIE 1993 International Symposium on ',
' Electronic Imaging: Science and Technology, volume 1905, pages 861-870,',
' San Jose, CA, 1993.',
' - O.L. Mangasarian, W.N. Street and W.H. Wolberg. Breast cancer diagnosis and ',
' prognosis via linear programming. Operations Research, 43(4), pages 570-577, ',
' July-August 1995.',
' - W.H. Wolberg, W.N. Street, and O.L. Mangasarian. Machine learning techniques',
' to diagnose breast cancer from fine-needle aspirates. Cancer Letters 77 (1994) ',
' 163-171.']
The above description states that there are 212 malignant samples and 357 benign. Matching these counts against the available target values determines for certain (as opposed to assuming) which target value corresponds to which target name.
pd.Series(data_dict['target']).value_counts()
1 357
0 212
dtype: int64
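The bundled target_names array gives the same mapping directly, since its order matches the integer labels:

# label 0 -> 'malignant', label 1 -> 'benign'
dict(enumerate(data_dict['target_names']))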
This shows that label 0 (212 samples) corresponds to malignant and label 1 (357 samples) to benign, so the presence of a malignant tumor is identified by a prediction of 0. Inspecting the classification report for class 0 shows that, for the business objective of detecting malignant tumors, the metrics are:
Precision: 100%
Recall: 98%
F1-score: 99%
classification_report(y_test, y_pred).split('\n')
[' precision recall f1-score support',
'',
' 0 1.00 0.98 0.99 42',
' 1 0.99 1.00 0.99 72',
'',
' accuracy 0.99 114',
' macro avg 0.99 0.99 0.99 114',
'weighted avg 0.99 0.99 0.99 114',
'']
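The malignant-class numbers can also be computed directly by treating label 0 as the positive class (an extra check; precision_score, recall_score, and f1_score were not in the original imports):

from sklearn.metrics import precision_score, recall_score, f1_score

# metrics with malignant (label 0) as the positive class
for name, metric in [('Precision', precision_score), ('Recall', recall_score), ('F1-score', f1_score)]:
    print(f"{name}: {metric(y_test, y_pred, pos_label=0):.2f}")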
# confusion matrix with ready to read labels
ConfusionMatrixDisplay(cm, display_labels=['Malignant','Benign']).plot()
plt.show()
This result is questionable from a business perspective: although precision is high, meaning every sample flagged as malignant was in fact malignant, the recall shows that one malignant sample slipped through the cracks. That translates to a failure to identify a malignant tumor for a patient, which is unacceptable. The second iteration would be to change the class weights in an attempt to reach perfect recall even if precision drops. It is far better to flag a malignant tumor when one is not present (a false positive) than to fail to identify one when it is (a false negative). A sketch of that second iteration follows below.
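The sketch reuses lr_pipeline from above: search over explicit class-weight dictionaries that penalize missed malignant cases more heavily, and score the search on malignant recall via make_scorer with pos_label=0. This is illustrative and not run here; optimizing recall alone can erode precision, so both metrics should be monitored on the next pass.

from sklearn.metrics import make_scorer, recall_score

# score candidates on recall for the malignant class (label 0)
malignant_recall = make_scorer(recall_score, pos_label=0)

# progressively heavier weights on the malignant class
recall_params = {'lr__C': [.01, .1, 1],
                 'lr__class_weight': [{0: w, 1: 1} for w in (1, 2, 5, 10)],
                 'lr__solver': ['liblinear']}

lr_recall_model = GridSearchCV(lr_pipeline, recall_params, cv=5, scoring=malignant_recall)
lr_recall_model.fit(X_train.values, y_train.values.ravel())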