You are joking, but this is exactly what happens if you optimize an algorithm for accuracy when classifying something with very few positive cases. The algorithm will simply label everything as negative, and accuracy will still be extremely high!
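As a quick illustration (a minimal sketch with a made-up 1% prevalence, not numbers from the thread): a classifier that always says "negative" still scores 99% accuracy.

```python
# Toy example: 1,000 cases, only 10 of them positive (1% prevalence).
labels = [1] * 10 + [0] * 990

# Degenerate "classifier" that labels everything negative.
predictions = [0] * len(labels)

correct = sum(p == y for p, y in zip(predictions, labels))
accuracy = correct / len(labels)
print(f"Accuracy: {accuracy:.1%}")  # 99.0%, despite missing every positive case
```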
This is also why medical studies never use accuracy as a measure if the disorder being studied is at all rare. Sensitivity and specificity, or positive/negative likelihood ratios, are more common.
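Those metrics all come straight from the confusion matrix; here's a small sketch of the standard definitions (the counts are hypothetical):

```python
# Confusion-matrix counts (made-up numbers for illustration).
tp, fn = 8, 2     # positives: correctly detected vs. missed
tn, fp = 950, 40  # negatives: correctly ruled out vs. false alarms

sensitivity = tp / (tp + fn)                   # true positive rate (recall)
specificity = tn / (tn + fp)                   # true negative rate
lr_positive = sensitivity / (1 - specificity)  # positive likelihood ratio
lr_negative = (1 - sensitivity) / specificity  # negative likelihood ratio

print(f"Sensitivity: {sensitivity:.2f}, Specificity: {specificity:.2f}")
print(f"LR+: {lr_positive:.1f}, LR-: {lr_negative:.2f}")
```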
This is actually a perfect example of why the difference between accuracy, precision, and recall matters. This algorithm has 0 precision and 0 recall; its only advantage is 100% inverse recall (all negative cases are correctly classified as negative).
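A sketch of the same toy setup, assuming scikit-learn is available (strictly speaking, precision is 0/0 when nothing is predicted positive; sklearn reports it as 0 with `zero_division=0`):

```python
from sklearn.metrics import precision_score, recall_score

# Same imbalanced toy data: 10 positives, 990 negatives.
y_true = [1] * 10 + [0] * 990
y_pred = [0] * 1000  # everything predicted negative

precision = precision_score(y_true, y_pred, zero_division=0)      # 0.0 (undefined, reported as 0)
recall = recall_score(y_true, y_pred)                              # 0.0 -- no positives found
inverse_recall = recall_score(y_true, y_pred, pos_label=0)         # 1.0 -- specificity

print(precision, recall, inverse_recall)  # 0.0 0.0 1.0
```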
Illustration of the difference between the two from my machine learning classes in college, which was obviously just the first Google result: