Two definitions that come up repeatedly in classification are precision and recall.
Say we have an image classification algorithm that decides whether an image is a hot-dog or not a hot-dog.
We have 70 images where 50 are hot-dogs and 20 are not hot-dogs.
After we run our program, it labels 40 images as hot-dogs and the remaining 30 as not hot-dogs.
After checking them:
Of the 40 that were labeled hot-dogs, 27 are actually hot-dogs and 13 are not.
And of the 30 that were labeled not hot-dogs, 23 are actually hot-dogs and 7 are actually not hot-dogs.
So for the hot-dogs:
True positive: 27
True negative: 7
False positive: 13
False negative: 23
Labeled as hot-dogs: 40
Labeled as not hot-dogs: 30
Precision = (True positive) / (Labeled as hot-dogs), i.e. TP / (TP + FP) = 27 / 40
Recall = (True positive) / (All actual hot-dogs), i.e. TP / (TP + FN) = 27 / 50
So precision is the true positives divided by the total number of items selected as positive.
Recall is the true positives divided by the total number of actually positive items.
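The arithmetic above can be sketched in a few lines of Python, plugging in the counts from the hot-dog example:

```python
# Counts from the hot-dog example above.
tp = 27  # labeled hot-dog, actually a hot-dog (true positive)
fp = 13  # labeled hot-dog, actually not (false positive)
fn = 23  # labeled not hot-dog, actually a hot-dog (false negative)
tn = 7   # labeled not hot-dog, actually not (true negative)

precision = tp / (tp + fp)  # 27 / 40
recall = tp / (tp + fn)     # 27 / 50

print(f"precision = {precision:.3f}")  # 0.675
print(f"recall    = {recall:.3f}")     # 0.540
```

Note that precision only looks at what the classifier selected (the 40 images), while recall looks at everything that should have been selected (the 50 actual hot-dogs).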