
Prediction on still images has worse performance on iOS than on Lobe desktop #22

Open
technoplato opened this issue Feb 8, 2021 · 31 comments

Comments

@technoplato

Hello, love the app, first of all. This is how training should be!

I did run into a snag: I've successfully trained the model to recognize screenshots from Audible, YouTube, and Apple Podcasts, and that model works very well on desktop using the "Play" section.

However, when I export the model and use it in the iOS bootstrap, it does a very poor job of recognizing any screenshot accurately.

All I've done with code is replaced LobeModel with SavedModel directly from Lobe.
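
For context, the swap amounts to something like this (a minimal sketch; `SavedModel` is the class Xcode generates from the exported .mlmodel, and the surrounding names are mine, not the bootstrap's exact API):

```swift
import CoreML
import Vision

// Load the exported model in place of the bundled LobeModel.
func loadClassifier() throws -> VNCoreMLModel {
    let coreMLModel = try SavedModel(configuration: MLModelConfiguration()).model
    return try VNCoreMLModel(for: coreMLModel)
}
```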

Any suggestions?

Video demo link: https://www.youtube.com/watch?v=_rtKnlmsxzM

@technoplato
Author

Disregard.

There is a popup when you go to export your model in the desktop application that asks if you'd like to optimize your model.

I clicked optimize on the second go-round and everything works flawlessly now. Thanks!

@technoplato technoplato changed the title Lobe - Model works perfectly with identical images on Desktop, but not on iOS bootstrap app [Solution - Optimize Model] Lobe - Model works perfectly with identical images on Desktop, but not on iOS bootstrap app Feb 8, 2021
@technoplato technoplato changed the title [Solution - Optimize Model] Lobe - Model works perfectly with identical images on Desktop, but not on iOS bootstrap app Lobe - Model works perfectly with identical images on Desktop, but not on iOS bootstrap app Feb 9, 2021
@technoplato technoplato reopened this Feb 9, 2021
@technoplato
Author

Sorry, Lobe folks, for the back and forth. I'm reopening this because performance on device, even with the optimized model, is much less accurate than in the desktop application.

Any tips would be appreciated.

@ellbosch
Contributor

ellbosch commented Feb 9, 2021

Thanks @technoplato—we will be deploying changes as soon as this week to fix these issues. You are correct to note that optimizing your model will not resolve this issue :)

@technoplato
Author

Well that's great to hear! Glad I'm not crazy.

As a side note, is Object Detection getting close, or is that still pretty far out on the timeline?

Thanks!

@ellbosch
Contributor

Object detection is a rather large release and a top priority, but we can't say more than that. We are starting an insiders ring to test early object detection features; if you are interested in joining, email [email protected].

@technoplato
Author

technoplato commented Feb 10, 2021 via email

@ellbosch
Contributor

For the particular bug you posted, it's specific to image processing. There's actually nothing wrong with the Core ML model; it should behave as you would expect. We're just feeding a bad input to the model, which creates the weird results you're seeing.

@technoplato
Author

technoplato commented Feb 10, 2021 via email

@ellbosch
Contributor

You are correct that it's a bug with the starter code. Here's a brief explanation of how iOS-bootstrap works:

  1. Video capture: first, the iOS device's camera transmits a video feed.
  2. Image processing: a frame of the video feed needs formatting (cropping and orientation adjustments) before we can predict.
  3. Prediction: the processed image is assigned a prediction by the Core ML model's classifier.

Step 2 above is what's broken, and it should be fixed this week.
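
In pseudo-Swift, the flow is roughly this (illustrative names and a Vision-based request are my assumptions here, not the bootstrap's exact code):

```swift
import CoreVideo
import Vision

// Sketch of the three-step pipeline above; names are illustrative.
func classify(frame pixelBuffer: CVPixelBuffer, model: VNCoreMLModel) {
    let request = VNCoreMLRequest(model: model) { request, _ in
        // Step 3: read the top label from the Core ML classifier.
        if let top = (request.results as? [VNClassificationObservation])?.first {
            print("\(top.identifier) (\(top.confidence))")
        }
    }
    // Step 2: crop/scale the frame to the model's input size. A wrong crop
    // or orientation here is exactly the kind of bug being described.
    request.imageCropAndScaleOption = .centerCrop

    // Step 1 delivered `pixelBuffer` from the camera; the orientation hint
    // must match how the frame was actually captured.
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right)
    try? handler.perform([request])
}
```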

I highly recommend our docs if you'd like to learn more! Please let me know if you have any other questions.

@technoplato
Author

technoplato commented Feb 11, 2021 via email

@ellbosch
Contributor

Good question! Yes, we also format images selected from the device library before sending them to prediction (i.e., images need to be formatted into a square aspect ratio).
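
For illustration, that formatting boils down to a center crop along these lines (a sketch of the idea, not our exact implementation):

```swift
import CoreGraphics

// Center-crop an image to the square aspect ratio the model expects.
func centerCropToSquare(_ image: CGImage) -> CGImage? {
    let side = min(image.width, image.height)
    let rect = CGRect(x: (image.width - side) / 2,
                      y: (image.height - side) / 2,
                      width: side,
                      height: side)
    return image.cropping(to: rect)
}
```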

@technoplato
Author

technoplato commented Feb 11, 2021 via email

@ellbosch
Contributor

ellbosch commented Feb 11, 2021

You're welcome :) The PR is here if you'd like a sneak peek of the fix: #23

@technoplato
Author

Super clean PR; love the refactoring and the added information.

Unfortunately, it looks like the same inaccurate-detection issue (on images that work on Desktop) as develop :(

Can I help provide any information for validating / resolving the issue?

@ellbosch
Contributor

Shoot 😩 I'm sorry this PR didn't fix things. My guess is that I focused too much on fixing image-processing bugs for the camera feed rather than image preview mode, although I know both modes are more reliable with this upcoming PR.

I'll try to repro your specific project. I think I know why this might still be breaking on your end. I'll reach back out to you via this thread when I have an update here.

@ellbosch ellbosch changed the title Lobe - Model works perfectly with identical images on Desktop, but not on iOS bootstrap app Prediction on still images has worse performance on iOS than on Lobe desktop Feb 12, 2021
@technoplato
Author

technoplato commented Feb 12, 2021 via email

@ellbosch
Contributor

Hello @technoplato! I believe PR #25 might have resolved this bug. Would you mind validating this by testing the u/elbosc/63/model-preprocessing branch with your project?

@technoplato
Author

technoplato commented Mar 25, 2021 via email

@technoplato
Author

technoplato commented Mar 26, 2021

Hey @ellbosch, I'm running into "Network Disconnected" errors when trying to download the application, either through https://lobeprod.azureedge.net/downloads/macos/Lobe.zip or through the home page (which I assume uses the same endpoint).

Can you send me the zipped application directly at [email protected]? Really looking forward to playing with the new tool and demos.

EDIT: I'm trying to download it through my phone and it's working so far. Strange.

@mbeissinger

@technoplato sent you an email with a link as well

@technoplato
Author

Unfortunately, no dice here either. I'm going to upload my training set and see if I'm just doing something wrong: https://drive.google.com/drive/folders/1nGTCSE9dsgz1ZJDVgc4rMGmBoyNJ-wmk?usp=sharing

I'd upload the model as well, but I suspect that's what I'm messing up.

Received the email, @mbeissinger. Going to try exporting a new model, but after trying a v.8 model with the new iOS bootstrap that @ellbosch linked above, I'm still getting poor results.

@mbeissinger

mbeissinger commented Mar 26, 2021

@technoplato is the model in iOS giving different results from Lobe's Use tab?

One thing from a quick look at the dataset: it is probably overfitting, since most of the images in each class look very similar to each other. I would prune out images that look nearly the same and try to capture more varied examples. Here is our docs section about improving the model/dataset: https://docs.lobe.ai/docs/improving/improving#why-is-lobe-not-predicting-well-on-new-images-in-use

@technoplato
Author

@mbeissinger Thank you for the suggestion.

I've got 99% accuracy within the Lobe Desktop application. The issue arises when I attempt to use the model in the ios-bootstrap. I believe Lobe Desktop's near-perfect accuracy implies that, despite a relatively repetitive dataset, the model "functions," even if only in the desktop application.

@mbeissinger

@technoplato ah, ok! So is the Core ML model not loading or giving no results, or is it giving different results than in Lobe?

@technoplato
Author

technoplato commented Mar 26, 2021 via email

@mbeissinger

@technoplato ok, great. I'll see if we can replicate.

@technoplato
Author

technoplato commented Mar 26, 2021 via email

@ellbosch
Contributor

@technoplato the only thing driving me nuts is that this bug still exists. Sorry we still haven't resolved this!

@technoplato
Author

technoplato commented Mar 30, 2021 via email

@mbeissinger

mbeissinger commented Mar 30, 2021

@technoplato @ellbosch I verified we have automatic tests for the model outputs going through the converters, so this is likely a bug in the iOS processing. There will be some slight differences in outputs due to the different frameworks, but they should not be major. I'm working on adding a file picker to the web-bootstrap so it isn't just the webcam; you should be able to test the same image within Lobe and in the starter project soon. Can you also try the Lobe Connect local API using the same image as something you drag into the Use tab?
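
If it helps, a quick Lobe Connect check from code could look roughly like this (the endpoint URL and JSON field name are assumptions on my part; use the values Lobe shows when you export for Lobe Connect):

```swift
import Foundation

// Sketch: POST an image to the Lobe Connect local API and print the response.
// The request body shape ("image" as base64) is an assumption; check the
// export panel in Lobe for your model's real endpoint and payload format.
func predictViaLobeConnect(imageData: Data, endpoint: URL) {
    var request = URLRequest(url: endpoint)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? JSONSerialization.data(
        withJSONObject: ["image": imageData.base64EncodedString()])

    URLSession.shared.dataTask(with: request) { data, _, _ in
        if let data = data, let body = String(data: data, encoding: .utf8) {
            print(body)  // JSON with predicted labels and confidences
        }
    }.resume()
}
```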

@technoplato
Author

technoplato commented Apr 17, 2021

Sorry for my delay, @mbeissinger. I finally got around to this, and the Lobe Connect Run mode works perfectly!

python3 lobeconnect.py
correct label:    youtube.png
predicted label:  youtube
confidence:       1
correct label:    audible.png
predicted label:  audible
confidence:       1

Confirming the React App works as well. Thanks for the updates to allow static image selection!
