[GSK-1275] Importance of metrics calculated on partial data slice #1169

andreybavt merged 10 commits into task/GSK-1078
Conversation
User KD_A on Reddit pointed out an issue with how we compute metrics on data slices. This is right: we may have 1000 samples in our data slice, but to calculate, for example, the recall we only use the positive samples, which may be just a few out of the total, making the detection a false positive.
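To illustrate the problem (a self-contained sketch with synthetic data, not the project's code): recall only depends on the positive samples, so its effective sample size can be far smaller than the slice size.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical slice: 1000 samples, only 5 of which are positive.
y_true = np.zeros(1000, dtype=int)
y_true[:5] = 1
y_pred = rng.integers(0, 2, size=1000)

# Recall = TP / (TP + FN): it only looks at the positive samples,
# so the effective sample size for this metric is 5, not 1000.
tp = int(np.sum((y_true == 1) & (y_pred == 1)))
fn = int(np.sum((y_true == 1) & (y_pred == 0)))
effective_n = tp + fn
recall = tp / effective_n

print("slice size:", len(y_true))
print("effective sample size for recall:", effective_n)
print("recall:", round(recall, 3))
```

With so few effective samples, the estimated recall is extremely noisy, which is exactly what makes the scan detection a likely false positive.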
…GSK-1279]
- [GSK-1275] Fixes problems with metrics calculated on small samples
- [GSK-1279] Experimental support for false discovery rate control via Benjamini–Hochberg procedure
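For reference, the Benjamini–Hochberg step-up procedure mentioned in the commit can be sketched as follows. This is the textbook procedure, not the project's actual implementation.

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Return a boolean mask of rejected hypotheses, controlling the
    false discovery rate at level alpha (textbook BH step-up procedure)."""
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order]
    # Find the largest k such that p_(k) <= (k / m) * alpha.
    thresholds = alpha * np.arange(1, m + 1) / m
    below = ranked <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        # Reject all hypotheses up to and including rank k.
        reject[order[: k + 1]] = True
    return reject

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.6]))
```

Controlling the FDR across all slice/metric combinations is a natural fit here, since the scan runs many hypothesis tests and some detections will be spurious by chance alone.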
andreybavt left a comment:
Also LGTM with some questions
```python
def _calculate_affected_samples(self, y_true: np.ndarray, y_pred: np.ndarray, model: BaseModel) -> int:
    if model.is_binary_classification:
        # F1 score will not be affected by true negatives
        neg = model.meta.classification_labels[0]
```
Why do we only do it for binary classification and not for all cases?
Because F1 is computed differently for multiclass. In our case it uses the total counts of true positives, false positives, and false negatives in a one-vs-rest way for each class, so in the end it uses all the samples.
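To see why every sample counts in the multiclass case, here is a small sketch (synthetic data, micro-averaged one-vs-rest F1; not the project's code): every misclassified sample contributes one FP and one FN, and every correct one contributes one TP, so no sample is ignored.

```python
import numpy as np

# Hypothetical 3-class predictions.
y_true = np.array([0, 0, 1, 1, 2, 2, 2, 0])
y_pred = np.array([0, 1, 1, 1, 2, 0, 2, 0])

# One-vs-rest counts aggregated over classes: each sample is either
# a TP (correct) or an FP+FN pair (wrong), so tp + fn == n_samples.
classes = np.unique(y_true)
tp = sum(int(np.sum((y_true == c) & (y_pred == c))) for c in classes)
fp = sum(int(np.sum((y_true != c) & (y_pred == c))) for c in classes)
fn = sum(int(np.sum((y_true == c) & (y_pred != c))) for c in classes)

micro_f1 = 2 * tp / (2 * tp + fp + fn)
print("samples:", len(y_true), "| tp:", tp, "fp:", fp, "fn:", fn)
print("micro F1:", round(micro_f1, 3))
```

Since tp + fn equals the number of samples, the affected-sample count for multiclass F1 is simply the whole slice, which is why the special case is only needed for binary classification.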
Kudos, SonarCloud Quality Gate passed!








Fixes #1159.