Confusion Matrix Calculator

Analyze classification model performance with confusion matrix metrics

How the Calculator Works

This calculator helps you evaluate classification model performance using confusion matrix metrics:

  • Enter the four basic confusion matrix values:
    • True Positives (TP): Positive cases correctly predicted as positive
    • True Negatives (TN): Negative cases correctly predicted as negative
    • False Positives (FP): Negative cases incorrectly predicted as positive
    • False Negatives (FN): Positive cases incorrectly predicted as negative
  • The calculator then computes (see the sketch after this list):
    • Accuracy: Overall proportion of correct predictions
    • Precision: Positive predictive value
    • Recall: True positive rate (sensitivity)
    • F1 Score: Harmonic mean of precision and recall
    • Specificity: True negative rate
    • False Positive Rate and False Negative Rate
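If you prefer to compute the same metrics in code, here is a minimal Python sketch of these calculations. The function name confusion_matrix_metrics and the handling of zero denominators are illustrative assumptions, not part of the calculator itself.

```python
def confusion_matrix_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Compute standard metrics from the four confusion matrix counts.

    Illustrative sketch only; zero denominators are returned as None.
    """
    def safe_div(num, den):
        # Guard against division by zero (e.g. no predicted positives).
        return num / den if den else None

    accuracy = safe_div(tp + tn, tp + tn + fp + fn)
    precision = safe_div(tp, tp + fp)      # positive predictive value
    recall = safe_div(tp, tp + fn)         # true positive rate (sensitivity)
    specificity = safe_div(tn, tn + fp)    # true negative rate
    fpr = safe_div(fp, fp + tn)            # false positive rate
    fnr = safe_div(fn, fn + tp)            # false negative rate
    f1 = (safe_div(2 * precision * recall, precision + recall)
          if precision is not None and recall is not None else None)

    return {
        "accuracy": accuracy,
        "precision": precision,
        "recall": recall,
        "f1_score": f1,
        "specificity": specificity,
        "false_positive_rate": fpr,
        "false_negative_rate": fnr,
    }


# Example with made-up counts: TP=80, TN=90, FP=20, FN=10
print(confusion_matrix_metrics(tp=80, tn=90, fp=20, fn=10))
```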

Understanding the Metrics

Key metrics and their formulas (a worked example follows the list):

  • Accuracy = (TP + TN) / (TP + TN + FP + FN)

    Measures overall correct predictions across all classes

  • Precision = TP / (TP + FP)

    How many of the predicted positives are actually positive

  • Recall = TP / (TP + FN)

    How many of the actual positives are correctly identified

  • F1 Score = 2 × (Precision × Recall) / (Precision + Recall)

    Harmonic mean balancing precision and recall

  • Specificity = TN / (TN + FP)

    How many of the actual negatives are correctly identified
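As a quick worked example using the same made-up counts as the sketch above (TP = 80, TN = 90, FP = 20, FN = 10):

  • Accuracy = (80 + 90) / (80 + 90 + 20 + 10) = 170 / 200 = 0.85
  • Precision = 80 / (80 + 20) = 0.80
  • Recall = 80 / (80 + 10) ≈ 0.889
  • F1 Score = 2 × (0.80 × 0.889) / (0.80 + 0.889) ≈ 0.842
  • Specificity = 90 / (90 + 20) ≈ 0.818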

Applications and Uses

Confusion matrices are used in various fields:

  • Machine learning model evaluation
  • Medical diagnosis testing
  • Quality control systems
  • Fraud detection
  • Spam filtering
  • Image classification
  • Natural language processing

Frequently Asked Questions

Which metric should I focus on?

It depends on your application. Use accuracy for balanced datasets, precision when false positives are costly, recall when false negatives are costly, and F1 score when you need a balance between precision and recall.

What's the difference between accuracy and precision?

Accuracy measures overall correct predictions (both positive and negative), while precision focuses on the accuracy of positive predictions only. High accuracy doesn't always mean high precision, especially with imbalanced datasets.
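As a hypothetical illustration with 1,000 cases of which only 10 are actually positive: if a model produces TP = 5, FN = 5, FP = 20, and TN = 970, accuracy is (5 + 970) / 1,000 = 97.5%, yet precision is only 5 / (5 + 20) = 20%. The model looks strong by accuracy but weak by precision.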

When should I use the F1 score?

Use the F1 score when you need a single metric that balances precision and recall, especially with imbalanced datasets. It's particularly useful when both false positives and false negatives have similar costs.
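For example, if precision = 0.90 and recall = 0.10, the F1 score is 2 × (0.90 × 0.10) / (0.90 + 0.10) = 0.18, far below the simple average of 0.50; the harmonic mean heavily penalizes models that trade one metric off against the other.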