GitHub limits anonymous calls to its API to 60 per hour from the same IP (see GitHub's rate-limiting documentation). Our start script requires several calls to the GitHub API when downloading a model (essentially one call per subdirectory in the model directory, plus one call to get the init.json first). That means at least 7 API calls for one model download, so a user cannot download more than about 8 models per hour. This will probably not happen too often, but it would still be nice to avoid.
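For illustration, a minimal sketch of how such a per-directory walk over the Contents API adds up; the repository name, directory layout and helper names are assumptions, not the actual start script:

```python
import requests

API_ROOT = "https://api.github.com/repos/OWNER/REPO/contents"  # hypothetical repo

def list_dir(path, ref="master"):
    # Every directory listing is one Contents API call and counts
    # against the anonymous quota of 60 requests/hour.
    resp = requests.get(f"{API_ROOT}/{path}", params={"ref": ref})
    resp.raise_for_status()
    return resp.json()

def collect_download_urls(model_dir):
    # Walk the model directory recursively; each subdirectory costs
    # another API call, hence several calls per model download.
    urls = []
    for entry in list_dir(model_dir):
        if entry["type"] == "dir":
            urls.extend(collect_download_urls(entry["path"]))
        else:
            urls.append(entry["download_url"])
    return urls
```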
One attempt to solve this could be to go through the GitHub Git Trees API and retrieve the whole directory tree. Unfortunately the returned results are not as informative as those of the current solution and would require a fair amount of string fiddling to get the download URLs. It would also still require 3 API calls (as far as I understand the API for now), because we would have to go through "master root tree" -> "models sha tree" -> "actual model sha tree" to get the sha for the model directory tree we are looking for (we cannot just get the whole root tree, because results are limited to 1000 entries, which we will probably exceed at some point).
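A rough sketch of what that three-step Trees lookup might look like; repository, branch and model names are placeholders, and the raw-URL construction at the end is exactly the string fiddling mentioned above:

```python
import requests

TREES_API = "https://api.github.com/repos/OWNER/REPO/git/trees"  # hypothetical repo

def get_tree(sha_or_ref):
    resp = requests.get(f"{TREES_API}/{sha_or_ref}")
    resp.raise_for_status()
    return resp.json()

def subtree_sha(tree, name):
    # Find the sha of a named subdirectory in a tree listing.
    for entry in tree["tree"]:
        if entry["path"] == name and entry["type"] == "tree":
            return entry["sha"]
    raise KeyError(name)

# Three calls: root tree -> "models" tree -> the model's own tree.
root = get_tree("master")
models = get_tree(subtree_sha(root, "models"))
model = get_tree(subtree_sha(models, "some_model"))  # hypothetical model name

# The Trees API returns no download_url, so raw URLs have to be assembled by hand.
urls = [
    f"https://raw.githubusercontent.com/OWNER/REPO/master/models/some_model/{e['path']}"
    for e in model["tree"]
    if e["type"] == "blob"
]
```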
Another solution would be for users to provide their GitHub credentials and run authenticated API calls, which are limited to 5000 per hour. But I don't think we should expect all users to have a GitHub account.
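If we went that way, it would essentially boil down to sending an Authorization header with every request; a minimal sketch, assuming the token is supplied by the user:

```python
import requests

session = requests.Session()
# Hypothetical: personal access token supplied by the user, e.g. via an environment variable.
session.headers["Authorization"] = "token <PERSONAL_ACCESS_TOKEN>"

# The rate_limit endpoint reports the current quota:
# 5000/hour when authenticated, 60/hour for anonymous requests.
resp = session.get("https://api.github.com/rate_limit")
print(resp.json()["rate"]["limit"])
```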
Note that starting models that were downloaded before and already exist locally won't require any GitHub API calls (but updating them of course will).