Use it? Tell us. #100
Replies: 7 comments
-
We use it for our custom Kubernetes scheduler.
-
Putting my comment from the old repo back here: Kopf is our de-facto way to go when we need to implement a custom controller on Kubernetes.
-
The CrateDB Kubernetes Operator, which was recently open-sourced, is built with Kopf.
-
We are experimenting with it to manage Beam/Flink applications on our IoT platform. The main motivation is to let an administrator use the platform without expertise in any of the components apart from Kubernetes: they should not need to know how Flink deployments work if they can deploy their applications through Kubernetes resources. We preferred Kopf because it involves much less boilerplate than any other framework; our initial implementation was done quite quickly.
-
We are Brazil's biggest trading-automation platform (https://smarttbot.com), and our next-generation robot scheduling is based on Kubernetes and uses Kopf.
-
We've built an Operator with Kopf which automatically generates OpenTelemetry traces for …
-
I built a workflow engine based on Kopf that executes DAG-style workflows. It's quite effective for handling asyncio tasks, such as calling various external APIs. Although it's mostly a side project, I'm currently using it for real automation work at several companies I work with.
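The DAG-style execution of asyncio tasks mentioned above can be illustrated with a minimal sketch (not the actual project's code): each task waits on the completion events of its dependencies, so independent branches run concurrently.

```python
import asyncio

async def run_dag(tasks, deps):
    """Run async tasks respecting dependency edges.

    tasks: mapping of name -> no-argument coroutine function.
    deps:  mapping of name -> set of names that must finish first.
    """
    done = {}      # name -> asyncio.Event, set when that task has finished
    results = {}

    async def run_one(name):
        # Block until every upstream task has completed.
        for dep in deps.get(name, ()):
            await done[dep].wait()
        results[name] = await tasks[name]()
        done[name].set()

    for name in tasks:
        done[name] = asyncio.Event()
    await asyncio.gather(*(run_one(n) for n in tasks))
    return results

# Example: C depends on A and B, which can run concurrently.
async def step(label):
    await asyncio.sleep(0.01)          # stand-in for an external API call
    return f"{label} done"

order = {"A": set(), "B": set(), "C": {"A", "B"}}
coros = {n: (lambda n=n: step(n)) for n in order}
print(asyncio.run(run_dag(coros, order)))
```

A real engine would add error propagation and cancellation, but the event-per-node pattern is the core of DAG scheduling on top of asyncio.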
-
We are happy to know that you like Kopf and can apply it in your work.
We will be even happier if you share with us how you use it: operators created, patterns invented, problems solved, and so on. Repos? Posts? Demos? Talks? Presentations? — Everything counts.
In addition to the emotional reward of creating a useful tool, this will help us decide on the feature roadmap and adjust Kopf to the real needs of real users.
For bugs, ideas, feature requests, or just questions, please open a new issue — so that other developers and users can find it later.
PoC for managing DataDog Monitor resources via kube: https://github.com/mzizzi/datadog_operator
Bridge for automatically syncing DNS entries for ingresses and services into MAAS through its API; hacky, but it ties a number of different systems together.
I built a first PoC in 2 hours to automatically sync secrets from Auth0.
Very nice and easy framework/tool!
We are currently using Metacontroller for one of our CRDs, but it is deprecated and I did not like some of its design choices, so for a second CRD I'm trying out Kopf. So far I think it looks better. Our system basically wraps our users' code with some of our own, and this code is executed in multiple places in our system. Our first use case will be a wrapper around Jobs that builds specific Docker images if they are not available yet, so that other services can request them asynchronously. The controller will look up whether an image already exists: if not, it creates a Job to build and push it; if it does, it just sets the status to Done. Other services can then interact with these CRD instances. This also lets us migrate more easily by creating a batch of custom resources when we update our own code.
https://github.com/bukwp/kopfmysql: some throw-away code, but it works. The only hard thing to figure out was testing. I found https://github.com/vapor-ware/kubetest, a nice project, but it needs some love; I was thinking about some wrapper around the Kubernetes Python API. Later I tried to create generic dataclasses to define multiple, more complicated CRDs with logic around Secrets that could later be easily imported into Kopf; see https://github.com/bukwp/kopfmysql/tree/v1alpha2/kopfmysql/kopfmysql. I generally love the abstraction and the idea of defining a CRD as a class: a dataclass already gives nice type hints that could be used to generate the CRD YAML, with methods on the class to handle create, update, etc. That sounds brilliant to me. Lookups between other resources (see what I tried in secret.py and later mysqlapi) and field validation based on type hints could also be achieved. Sorry for the formatting and errors.
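The "CRD as a dataclass" idea can be sketched in a few lines (hypothetical class and type mapping, not the actual kopfmysql code): the dataclass's type hints are enough to derive an OpenAPI v3 schema for the CRD's spec.

```python
from dataclasses import dataclass, fields

# Minimal mapping from Python annotations to OpenAPI v3 types.
PY_TO_OPENAPI = {str: 'string', int: 'integer', bool: 'boolean'}

@dataclass
class MySQLSecretSpec:
    # Hypothetical spec fields for illustration only.
    database: str
    username: str
    replicas: int = 1

def spec_schema(cls):
    # Derive the `openAPIV3Schema` properties from the dataclass fields.
    props = {f.name: {'type': PY_TO_OPENAPI[f.type]} for f in fields(cls)}
    return {'type': 'object', 'properties': props}

print(spec_schema(MySQLSecretSpec))
```

From such a schema dict it is a short step to dumping the full CRD manifest as YAML, which is the generation idea the comment describes.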
Hi! I am creating an operator that takes a list of keys, generates random passwords, and creates a Secret with those generated values. The idea is for containers that read default credentials from a Secret, e.g.
MYSQL_PASSWORD
I can have the operator generate them, and then inject the Secret into each pod without needing to manually generate and provide them.

We built an operator based on Kopf to manage the AWS EKS aws-auth ConfigMap (as described here: https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html) through custom resources.
https://github.com/TierMobility/aws-auth-operator
I was looking at using Kopf for our MariaDB operator; however, we are on a CentOS 8 distribution that ships Python 3.6.8. I assume that since Kopf requires Python 3.7, you are using specific features of that release? We are tied to the CentOS distribution, so it is unfortunate that Kopf isn't supported on Python 3.6. Just looking for confirmation that Kopf would not be compatible with Python 3.6. Thanks.
rvlane Yes, you are right. Kopf requires Python 3.7+. See #74 — it was already bound to some of the 3.7 core features a year ago, when it was just starting and had only a few features; by now, Kopf depends on many more 3.7+ features, which makes 3.6 support impossible (or highly effortful).
Please consider using pyenv for isolated Python 3.7 builds from source, some backported pre-built packages or repositories (not sure how this works in CentOS), or Docker containers / K8s pods based on Python 3.7.
eshepelyuk Can you please create a separate issue for that?
nolar does Kopf have any social communication tools, like Gitter, Twitter, or Slack?
eshepelyuk Not yet. Twitter [nolar](https://twitter.com/nolar) is one way to reach me. I am also passively present in https://kubernetes.slack.com/ — there is no dedicated channel for Kopf, so #kubernetes-operators is a good place to start (maybe); I get instant notifications when someone mentions me.
For feature requests and questions, it is better to use issues here on GitHub, so that the answers can be seen by others in the future and can serve as a backlog for follow-up improvements based on developer feedback.
With Kopf we implemented an operator. It helps us control a rising number of systems, with tasks like informing people, keeping things up to date, and keeping the system calm.
I am very excited about the upcoming daemon decorator.
I'm using Kopf to write a dynamic Persistent Volume provisioner for ZFS (zfs-provisioner).
I am basically rewriting my local-zfs-provisioner from Go to Python because the upstream Go libs have some issues I cannot and don't want to deal with.
Thanks to you, I can code in Python instead of Go.
\o/
I don't see myself going back ;-)
I am using Kopf for automated provisioning of secrets and configmaps across namespaces. For this, I introduced two CRDs called ClusterConfigMap and ClusterSecret. You can also specify a namespace whitelist or blacklist.
Repo
A lot of other ideas are in the pipeline for development.