Issue
I wrote code for a confusion matrix to compare two lists of numbers, following documentation online. When I thought I had good results, I noticed that the values were positioned in a strange way. First, this is the code I am using:
## Classification report and confusion matrix
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import (classification_report, confusion_matrix,
                             ConfusionMatrixDisplay)

def evaluate_pred(y_true, y_pred):
    y_test = np.array(y_true)
    y_predict = np.array(y_pred)
    target_names = ['Empty', 'Human', 'Dog', 'Dog&Human']
    labels_names = [0, 1, 2, 3]
    print(classification_report(y_test, y_predict, labels=labels_names, target_names=target_names))
    # Normalized confusion matrix
    cm = confusion_matrix(y_test, y_predict, labels=labels_names, normalize='pred')
    # Raw-count confusion matrix
    cm2 = confusion_matrix(y_test, y_predict, labels=labels_names)
    disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=target_names)
    disp = disp.plot(cmap=plt.cm.Blues, values_format='g')
    disp2 = ConfusionMatrixDisplay(confusion_matrix=cm2, display_labels=target_names)
    disp2 = disp2.plot(cmap=plt.cm.Blues, values_format='g')
    plt.show()
After giving it two lists (labels and predictions), I get the following result (below is the normalized matrix). As you can see, the rows for each class are supposed to add up to the total, but instead it is the columns that do. I have tried different things, but I still cannot get it fixed. There is something I am missing that I cannot figure out. Thanks a lot for any help.
Solution
I simply had to use normalize='true' instead of normalize='pred' to solve the issue. Setting the value to 'pred' normalizes over each column (the predicted labels), so each column sums to 1, whereas 'true' normalizes over each row (the true labels), which is what I wanted.
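A minimal sketch of the difference (the toy label lists here are made up for illustration): with normalize='true' each row of the matrix sums to 1, and with normalize='pred' each column does.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Toy data: true labels and predictions for three classes
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]

# normalize='true': each row (true class) is divided by its row total
cm_true = confusion_matrix(y_true, y_pred, normalize='true')

# normalize='pred': each column (predicted class) is divided by its column total
cm_pred = confusion_matrix(y_true, y_pred, normalize='pred')

print(cm_true.sum(axis=1))  # rows sum to 1
print(cm_pred.sum(axis=0))  # columns sum to 1
```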
Answered By - Wazaki