As I understand it, through a tag-based system the plugin can assign different compute resources to each node, as follows:
```yaml
# excerpt from vertexai.yml
# see https://kedro-vertexai.readthedocs.io/en/0.9.1/source/02_installation/02_configuration.html

# Optional section allowing adjustment of the resource
# reservations and limits for the nodes
resources:
  # For nodes that require more RAM you can increase the "memory"
  data-import-node:
    memory: 2Gi

  # Training nodes can utilize more than one CPU if the algorithm
  # supports it
  model-training-node:
    cpu: 8
    memory: 60Gi

  # GPU-capable nodes can request 1 GPU slot
  tensorflow-node:
    gpu: 1

  # Resources can also be configured via node tags
  # (if there is both a node-name and a tag configuration for the
  # same resource, the tag configuration is overridden by the node one)
  gpu_node_tag:
    cpu: 1
    gpu: 2

  # Default settings for the nodes
  __default__:
    cpu: 200m
    memory: 64Mi

# Optional section allowing configuration of node selector constraints,
# such as a GPU accelerator for nodes with GPU resources.
# (Note that not all accelerators are available in all regions -
# https://cloud.google.com/compute/docs/gpus/gpu-regions-zones -
# nor for all machine and resource configurations -
# https://cloud.google.com/vertex-ai/docs/training/configure-compute#specifying_gpus)
node_selectors:
  gpu_node_tag:
    cloud.google.com/gke-accelerator: NVIDIA_TESLA_T4
  tensorflow-step:
    cloud.google.com/gke-accelerator: NVIDIA_TESLA_K80
```
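To make the precedence rule in the config comments concrete, here is a minimal sketch (an illustration only, not the plugin's actual code) of how `__default__`, tag-level, and node-level settings would combine, with node-name config taking priority over tag config:

```python
def effective_resources(resources, node_name, tags):
    """Merge resource settings: defaults, then tag config, then node config.

    Later updates win, so a node-name entry overrides a tag entry,
    which overrides __default__ (matching the plugin's documented rule).
    """
    merged = dict(resources.get("__default__", {}))
    for tag in tags:
        merged.update(resources.get(tag, {}))
    merged.update(resources.get(node_name, {}))
    return merged


# Resource section mirroring the vertexai.yml excerpt above
resources = {
    "__default__": {"cpu": "200m", "memory": "64Mi"},
    "gpu_node_tag": {"cpu": 1, "gpu": 2},
    "model-training-node": {"cpu": 8, "memory": "60Gi"},
}

# A node matched only by tag gets the tag config on top of defaults:
print(effective_resources(resources, "some-node", ["gpu_node_tag"]))
# → {'cpu': 1, 'memory': '64Mi', 'gpu': 2}

# A node matched by both name and tag: the node-name "cpu" wins:
print(effective_resources(resources, "model-training-node", ["gpu_node_tag"]))
# → {'cpu': 8, 'memory': '60Gi', 'gpu': 2}
```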
As far as I recall, no, because this would require using a different component, not just node selectors; currently every node is translated into a ContainerOp.
Thanks @em-pe. Yep, I dove into the code and it makes sense. I would love to contribute to this package, as I loved the concept of translating Kedro nodes into Vertex AI nodes, the grouping logic, etc. It seems this is EXACTLY what I want, just with a bit more flexibility. I am embarking on a Vertex AI + Cloud Composer + Kedro MLOps journey, and will share my learnings and hopefully contribute back :)
Does this plugin also support using Dataproc Serverless components for a pipeline or node?