
corrupted ccz handling #779

Closed · wants to merge 1 commit

Conversation

knguyen142 (Contributor)

https://dimagi-dev.atlassian.net/browse/SAAS-11333

If the downloaded ccz is corrupted, we still save it in memory and continually hit the same exception.

This PR handles that case: if we hit those exceptions, we clear the ccz from memory so that subsequent requests will try to redownload it.
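
To make the described behavior concrete, here is a minimal sketch of the recovery path, assuming a hypothetical ArchiveStore interface; the class and method names (readCachedArchive, removeArchiveFile) and the choice of ZipException as the trigger are illustrative assumptions, not the actual classes touched by this PR.

import java.io.IOException;
import java.util.zip.ZipException;

// Sketch only: ArchiveStore is a hypothetical stand-in for formplayer's
// ccz caching layer, not the real class hierarchy changed in this PR.
public class CczRecoverySketch {

    interface ArchiveStore {
        byte[] readCachedArchive(String appId) throws IOException; // may throw if the cached ccz is corrupt
        String removeArchiveFile(String appId);                    // evict the cached copy, returning its GUID
    }

    private final ArchiveStore store;

    public CczRecoverySketch(ArchiveStore store) {
        this.store = store;
    }

    public byte[] getArchive(String appId) throws IOException {
        try {
            return store.readCachedArchive(appId);
        } catch (ZipException e) {
            // The cached ccz could not be read as a zip: evict it so the next
            // request re-downloads a fresh copy instead of hitting the same
            // exception on every attempt.
            store.removeArchiveFile(appId);
            throw e;
        }
    }
}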

Testing

Tested locally.

  1. Modified cchq so that the downloaded ccz file contains scratch data (in cchq, instead of returning the real contents, we return a string like "x").
  2. Try to start webapps; it will continually try to download the ccz.

[Screenshot: Screen Shot 2020-10-28 at 4.23.23 PM]

  3. Fix the cchq code so we return the actual contents.
  4. Retry webapps, and it successfully downloads and runs.

Issues

This can download a lot of ccz's if cchq continually returns invalid files.

[Screenshot: Screen Shot 2020-10-28 at 4.24.02 PM]

I don't know whether every path to those exceptions should have the zip archive file removed from memory. I also couldn't test the UnresolvedResourceException path, which was the original exception in the issue.

Depends on https://github.com/dimagi/commcare-core/pull/943; do not merge this PR until that one is in.

@Override
public String removeArchiveFile(String appId) {
    String mGUID = super.removeArchiveFile(appId);
    // Also clear the corresponding redis cache entry for this archive.
    redisTemplate.delete(String.format("formplayer:archive:%s", mGUID));
    return mGUID;
}
knguyen142 (Contributor, Author):
Look into deleting the file on disk.
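
As a rough illustration of this suggestion, a best-effort disk cleanup could look like the sketch below; the class name, the getArchivePath helper, and the temp-dir ".ccz" naming scheme are assumptions, not code from this PR.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ArchiveDiskCleanupSketch {

    // Assumed location scheme for locally cached ccz files (illustrative only).
    static Path getArchivePath(String guid) {
        return Paths.get(System.getProperty("java.io.tmpdir"), guid + ".ccz");
    }

    static void deleteArchiveFromDisk(String guid) {
        try {
            // Remove the on-disk copy as well, so a corrupt file cannot be
            // re-read after the redis key and in-memory entry are cleared.
            Files.deleteIfExists(getArchivePath(guid));
        } catch (IOException e) {
            // Best effort: a stale file will be overwritten by the next
            // successful download.
        }
    }
}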

ctsims (Member) commented Oct 28, 2020:

@knguyen142 One weakness in this caching code that I found a while ago is that it appears to re-use the same redis cache key across all of the servers despite the file being local (#741). One downside is that removing the key removes it for every other server as well, which may or may not be worth worrying about. I'm not actually sure the redis-keyed cache does very much (since it kind of can't) vs. the in-memory cache.
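
Purely to illustrate the concern, a server-scoped key might look like the sketch below; the host-qualified format and class name are assumptions, whereas the key format actually used in this PR is "formplayer:archive:%s".

import java.net.InetAddress;
import java.net.UnknownHostException;

public class ArchiveCacheKeySketch {

    // Qualify the cache key with the local hostname so that evicting an entry
    // on one server does not evict it for every other server.
    static String serverScopedKey(String guid) {
        return String.format("formplayer:archive:%s:%s", localHostName(), guid);
    }

    static String localHostName() {
        try {
            return InetAddress.getLocalHost().getHostName();
        } catch (UnknownHostException e) {
            return "unknown-host";
        }
    }
}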

knguyen142 (Contributor, Author):
I decided to close this because we cannot reproduce the underlying issue, so these changes might end up doing more harm than good.

knguyen142 closed this Nov 6, 2020.