
[Bug]: OpenVINOInferencer does not auto resize the images leading to wrong inference results #2266

Open
FedericoDeBona opened this issue Aug 21, 2024 · 1 comment


FedericoDeBona commented Aug 21, 2024

Describe the bug

I'm using a custom dataset with the same structure as the MVTec datasets. When running inference with OpenVINO, if the image is not resized to the training image size, the prediction gives wrong results. I think #2136 is the same problem.

Wrong prediction

Inference with inferencer.predict(image=image_path), where the images are 1440x1440:
(screenshot: openvino)
In this dataset the images are 1024x1024:
(screenshot: bad)

Correct prediction

Inference with:

from PIL import Image

image = Image.open(img_path)
image = image.resize((256, 256))  # resize to the training image size
predictions = inferencer.predict(image=image)

(screenshot: openvino resized good)

Torch inferencer

Using the Torch inferencer, the result is correct without resizing the images.
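Until the OpenVINO inferencer resizes automatically, the manual workaround above can be wrapped in a small helper. This is a hypothetical sketch (predict_resized is not part of anomalib): it assumes the inferencer accepts a PIL image, as in the snippet above, and that the training image size was 256x256.

```python
from PIL import Image


def predict_resized(inferencer, image, size=(256, 256)):
    """Resize a PIL image to the training image size before inference.

    Hypothetical helper: `size` must match the `image_size` the model
    was trained with (here 256x256), otherwise the anomaly map is wrong.
    """
    if image.size != size:
        image = image.resize(size)
    return inferencer.predict(image=image)
```

Usage would then be, for example, predictions = predict_resized(inferencer, Image.open(img_path)).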

Code used for training:

from anomalib.data import MVTec
from anomalib.deploy import CompressionType, ExportType, OpenVINOInferencer
from anomalib.engine import Engine
from anomalib.models import Padim

datamodule = MVTec(
    category=CATEGORY,
    root=f"{ROOT}/datasets_original",
    image_size=(256, 256),
)
model = Padim()
engine = Engine(default_root_dir=RESULTS_DIR)
engine.fit(datamodule=datamodule, model=model)
engine.export(
    model=model,
    export_type=ExportType.OPENVINO,
    compression_type=CompressionType.INT8,
)
inferencer = OpenVINOInferencer(
    path=f"{RESULTS_DIR}/{MODEL}/MVTec/{CATEGORY}/latest/weights/openvino/model.bin",
    metadata=f"{RESULTS_DIR}/{MODEL}/MVTec/{CATEGORY}/latest/weights/openvino/metadata.json",
    device="CPU",
)

Dataset

Other (please specify in the text field below)

Model

PADiM

Steps to reproduce the behavior

See the training and inference code above.

OS information

  • OS: Ubuntu 24
  • Python version: 3.10.14
  • Anomalib version: 1.2.0dev
  • PyTorch version: 2.4.0+cu118
  • CUDA/cuDNN version: 11.8
  • GPU models and configuration: GeForce RTX 3090 Ti
  • Additional context: I'm using a custom dataset

Expected behavior

OpenVINOInferencer should predict correctly without manually resizing the images, as the Torch inferencer does.

Screenshots

Pip/GitHub

GitHub

What version/branch did you use?

No response

Configuration YAML

-

Logs

-

Code of Conduct

  • I agree to follow this project's Code of Conduct
@watertianyi

@FedericoDeBona

I exported Patchcore to OpenVINO, and the following error occurred when using the CPU:

image = cv2.resize(image, tuple(list(self.input_blob.shape)[2:][::-1]))
RuntimeError: Exception from src/core/src/partial_shape.cpp:266:
to_shape was called on a dynamic shape.

Do you know why? I also get an error when using the GPU. Is an NVIDIA GPU not usable here?
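The to_shape error indicates the exported model has a dynamic input shape, so input_blob.shape cannot be converted to a static resize target. A possible workaround (a sketch, not a confirmed fix) is to fix the input to a static NCHW shape before compiling. The helper below only builds the shape list; the OpenVINO calls in the comments assume the standard openvino runtime API and a 256x256 training size.

```python
def static_input_shape(image_size, batch=1, channels=3):
    """Build a static NCHW shape list from the training image size."""
    height, width = image_size
    return [batch, channels, height, width]


# With OpenVINO installed, the dynamic model could be reshaped before
# compiling (sketch, assuming the standard openvino runtime API):
#
#   import openvino as ov
#   core = ov.Core()
#   model = core.read_model("model.xml")
#   model.reshape(static_input_shape((256, 256)))  # [1, 3, 256, 256]
#   compiled = core.compile_model(model, "CPU")
```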
