This idea is borrowed from Azure ML (and this PR: getindata/kedro-azureml#15), where you define an Environment: a Docker image that runs your code, but the code itself is not part of the image (only the dependencies are baked in).
This workflow will make data science iterations faster, as users will not have to rebuild the Docker image every time they want to run or debug something in Vertex AI. This issue will be partially addressed by #81, but this would be the next iteration on top of that.
The general workflow would look like this:
1. A Docker image with the dependencies is uploaded to the container registry.
2. The user runs `kedro vertexai run-once` with some flag (or maybe we should have `kedro vertexai run` for Docker and `kedro vertexai run-once` for this flow 💡).
3. The Kedro project code is packaged, compressed, and copied to GCS, and the job is started within the container. The container should have a modified entrypoint that first downloads the code from GCS and then executes it (see the sketch below).
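A minimal sketch of what the packaging/upload step and the modified entrypoint could look like, assuming a hypothetical bucket name, blob path, and extraction directory (using `tarfile` and the `google-cloud-storage` client). This is only an illustration of the flow, not the proposed implementation:

```python
# Sketch only: the bucket name, blob path, and paths below are assumptions
# for illustration, not part of the plugin's actual implementation.
import subprocess
import tarfile
from pathlib import Path

from google.cloud import storage  # pip install google-cloud-storage

BUCKET = "my-vertexai-staging-bucket"      # hypothetical staging bucket
BLOB = "kedro-runs/code-snapshot.tar.gz"   # hypothetical object path


def package_and_upload(project_dir: str = ".") -> None:
    """Client side: compress the Kedro project and push it to GCS."""
    archive = Path("/tmp/code-snapshot.tar.gz")
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(project_dir, arcname="project")
    storage.Client().bucket(BUCKET).blob(BLOB).upload_from_filename(str(archive))


def entrypoint() -> None:
    """Container side: pull the code snapshot, unpack it, and run the pipeline."""
    archive = Path("/tmp/code-snapshot.tar.gz")
    storage.Client().bucket(BUCKET).blob(BLOB).download_to_filename(str(archive))
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall("/home/kedro")
    subprocess.run(["kedro", "run"], cwd="/home/kedro/project", check=True)
```

In practice the blob path would probably be unique per run and passed to the container via a job argument or environment variable, so that concurrent runs don't overwrite each other's snapshots.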
Please discuss the design with @em-pe and @szczeles before implementing.
@adrienpl Agree! The current development cycle with docker images is so painful...
Actually, I was working on an implementation of this feature a year ago. Once I find the local branch, I will push it so somebody can take it over (I'm not into kedro/vertex anymore).