Data Science, Probability, Life

Life is best understood through a probabilistic lens


Two Common Mistakes

In most ML methods (including random forests, gradient boosting, logistic regression, and neural networks), the model outputs a score, which yields a “ranking classification”.  However, two very common mistakes occur in dealing with this score:

  1. Automatically using a “default” threshold of 0.5 to convert the score to a hard classification, rather than examining performance across a range of thresholds.  (This is encouraged by sklearn’s convention that “model.predict” applies the 0.5 threshold for you, while getting at the underlying score requires the clunkier “model.predict_proba”.)
  2. Treating the score directly as a probability, without calibrating it.  This is patently wrong when using models like random forests (where the vote proportion certainly does not indicate the probability of being a ‘1’), and inaccurate even in logistic regression (where the output purports to be a probability, but often is not well calibrated).  Both fixes are sketched in the code below.
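To make this concrete, here is a minimal sketch of both fixes in scikit-learn. The synthetic dataset, the choice of a random forest, and the isotonic calibration method are all illustrative assumptions, not the only reasonable options:

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

# Illustrative imbalanced dataset (assumption: ~10% positives).
X, y = make_classification(n_samples=5000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Mistake 1: model.predict() silently applies a 0.5 threshold.
# Fix: take the score from predict_proba and sweep thresholds explicitly.
scores = model.predict_proba(X_test)[:, 1]
for t in (0.1, 0.3, 0.5, 0.7):
    hard = (scores >= t).astype(int)
    print(f"t={t}: precision={precision_score(y_test, hard, zero_division=0):.2f}, "
          f"recall={recall_score(y_test, hard):.2f}")

# Mistake 2: the raw vote proportion is not a calibrated probability.
# Fix: calibrate on held-out folds before treating scores as probabilities.
calibrated = CalibratedClassifierCV(
    RandomForestClassifier(random_state=0), method="isotonic", cv=5
).fit(X_train, y_train)
probs = calibrated.predict_proba(X_test)[:, 1]  # now closer to a true P(y=1)
```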

We’ll dive into these mistakes in more detail in future posts.

Three Problems in Binary Classification

Often students working on binary (0-1) classification problems would tell me that a particular model or approach “doesn’t work”.  When I asked to see the results of the model (say, on a holdout set), they would show me a confusion matrix where the model predicted 0 for every data point.  Only when I asked what threshold they used, or what the ROC curve (or better yet, the precision-recall curve) looked like, would they start to realize that they had missed something important.
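Continuing with the hypothetical model and holdout split from the sketch above, here is roughly what that diagnostic looks like in code; the particular metrics shown are one reasonable choice, not a prescribed workflow:

```python
from sklearn.metrics import (average_precision_score, confusion_matrix,
                             precision_recall_curve, roc_auc_score)

# The "doesn't work" symptom: on an imbalanced problem, the default 0.5
# threshold can yield a confusion matrix that predicts 0 for everything.
print(confusion_matrix(y_test, model.predict(X_test)))

# But the ranking may still be informative: check threshold-free metrics
# and the full precision-recall trade-off before giving up on the model.
scores = model.predict_proba(X_test)[:, 1]
print("ROC AUC:", roc_auc_score(y_test, scores))
print("Average precision:", average_precision_score(y_test, scores))
precision, recall, thresholds = precision_recall_curve(y_test, scores)
```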

One very important aspect of binary classification that is (IMHO) not sufficiently stressed is that there are actually three different problems:

  1. Hard Classification – firmly deciding to make a hard 0/1 call for each data point in the test set.
  2. Ranking Classification – “scoring” each data point, where a higher score means more likely to be a ‘1’ (and thereby ranking the entire test set from most likely to least likely to be a ‘1’).
  3. Probability Prediction – assigning to each point a (well-calibrated) probability that it is a ‘1’.  (All three outputs are shown in the sketch after this list.)
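The distinction is easiest to see in code. Continuing the sketch above (with `model` the fitted classifier and `calibrated` its calibrated wrapper), the same underlying score supports all three problems; the threshold value here is purely illustrative:

```python
import numpy as np

scores = model.predict_proba(X_test)[:, 1]

# 1. Hard classification: commit to a 0/1 call at a chosen threshold
# (0.3 is an arbitrary illustration; choose it to suit your costs).
hard_calls = (scores >= 0.3).astype(int)

# 2. Ranking classification: order the test set from most to least
# likely to be a '1'; only the relative order of the scores matters.
ranking = np.argsort(-scores)

# 3. Probability prediction: only the calibrated scores should be read
# as P(y = 1); the raw vote proportions should not.
probabilities = calibrated.predict_proba(X_test)[:, 1]
```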
