
[Bug]: F1AdaptiveThreshold incorrectly selects minimum value when no anomalous images are in validation set #2433

Open
tanemaki opened this issue Nov 25, 2024 · 0 comments · May be fixed by #2437

@tanemaki
Contributor

Describe the bug

The F1AdaptiveThreshold is designed to select the threshold that best separates the normal and anomalous classes based on the F1 score. When the validation set contains only normal images, this optimal threshold should correspond to the "maximum" predicted value. However, the current implementation incorrectly selects the "minimum" predicted value instead. This error leads to an abnormally low threshold whenever no anomalous images are available for validation, resulting in an excessively high false alarm rate.

Dataset

N/A

Model

N/A

Steps to reproduce the behavior

import torch

from anomalib.metrics import F1AdaptiveThreshold

# Validation set containing only normal images (label 0) with increasing anomaly scores.
labels = torch.tensor([0, 0, 0, 0])
preds = torch.tensor([1.0, 2.0, 3.0, 4.0])

adaptive_threshold = F1AdaptiveThreshold(default_value=0.5)
threshold = adaptive_threshold(preds, labels)

# threshold is now tensor(1.0), but it should be tensor(4.0) because all images are normal
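
For context, here is a standalone sketch of the failure mode (my own reconstruction, not anomalib's actual code): with only normal labels, the F1 score is zero at every candidate threshold, so an argmax-based selection falls back to index 0, i.e. the smallest threshold.

import torch

labels = torch.tensor([0, 0, 0, 0])
preds = torch.tensor([1.0, 2.0, 3.0, 4.0])

thresholds = preds.unique()          # candidate thresholds: 1.0, 2.0, 3.0, 4.0
f1_scores = []
for thr in thresholds:
    pred_labels = (preds >= thr).int()
    tp = ((pred_labels == 1) & (labels == 1)).sum()
    fp = ((pred_labels == 1) & (labels == 0)).sum()
    fn = ((pred_labels == 0) & (labels == 1)).sum()
    f1_scores.append(2 * tp / (2 * tp + fp + fn + 1e-10))  # F1 = 0 whenever tp == 0

f1_scores = torch.stack(f1_scores)          # tensor([0., 0., 0., 0.])
print(thresholds[torch.argmax(f1_scores)])  # tensor(1.) -- the minimum, not the maximum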

OS information

OS information:

  • OS: macOS Sequoia 15.1.1
  • Python version: 3.10.14
  • Anomalib version: 2.0.0dev
  • PyTorch version: 2.5.1
  • CUDA/cuDNN version: N/A
  • GPU models and configuration: N/A
  • Any other relevant information: N/A

Expected behavior

When all images are normal, the threshold computed by F1AdaptiveThreshold should be the "maximum" predicted value.

In other words, F1AdaptiveThreshold should pass the following test case:

import torch

from anomalib.metrics import F1AdaptiveThreshold

labels = torch.tensor([0, 0, 0, 0])
preds = torch.tensor([1.0, 2.0, 3.0, 4.0])

adaptive_threshold = F1AdaptiveThreshold(default_value=0.5)
adaptive_threshold.update(preds, labels)
threshold_value = adaptive_threshold.compute()

assert threshold_value == 4.0
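
One possible remedy (a hypothetical sketch using an illustrative fallback_threshold helper, not necessarily the approach taken in #2437) is to skip the F1 search entirely when the validation targets contain no anomalous samples and fall back to the maximum predicted score:

import torch

def fallback_threshold(preds: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    # Only meaningful when every sample is normal; in that case the largest
    # score observed on normal data is the least false-alarm-prone threshold.
    assert targets.sum() == 0, "fallback only applies when all samples are normal"
    return preds.max()

labels = torch.tensor([0, 0, 0, 0])
preds = torch.tensor([1.0, 2.0, 3.0, 4.0])
assert fallback_threshold(preds, labels) == 4.0  # matches the expected test case above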

Screenshots

No response

Pip/GitHub

GitHub

What version/branch did you use?

bcc0b43

Configuration YAML

N/A

Logs

N/A

Code of Conduct

  • I agree to follow this project's Code of Conduct