This guide walks through an example of building a simple memcached-operator using the operator-sdk CLI tool and controller-runtime library API. To learn how to use Ansible or Helm to create an operator, see the Ansible Operator User Guide or the Helm Operator User Guide. The rest of this document will show how to program an operator in Go.
- git
- go version v1.12+
- mercurial version 3.9+
- docker version 17.03+
- kubectl version v1.11.3+
- Access to a Kubernetes v1.11.3+ cluster
Note: This guide uses minikube version v0.25.0+ as the local Kubernetes cluster and quay.io for the public registry.
Follow the steps in the installation guide to learn how to install the Operator SDK CLI tool.
Use the CLI to create a new memcached-operator project:
$ mkdir -p $HOME/projects
$ cd $HOME/projects
$ operator-sdk new memcached-operator --repo=github.com/example-inc/memcached-operator
$ cd memcached-operator
To learn about the project directory structure, see the project layout doc.
`operator-sdk new` generates a `go.mod` file to be used with Go modules. The `--repo=<path>` flag is required when creating a project outside of `$GOPATH/src`, as scaffolded files require a valid module path. Ensure you activate module support before using the SDK. From the Go modules Wiki:
You can activate module support in one of two ways:
- Invoke the go command in a directory with a valid go.mod file in the current directory or any parent of it and the environment variable GO111MODULE unset (or explicitly set to auto).
- Invoke the go command with the GO111MODULE=on environment variable set.
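For example, to force module support on regardless of the current directory:
$ export GO111MODULE=on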
By default `--vendor=false`, so an operator's dependencies are downloaded and cached in the Go modules cache. Calls to `go {build,clean,get,install,list,run,test}` by `operator-sdk` subcommands will use an external modules directory. Execute `go help modules` for more information.
The Operator SDK can create a `vendor` directory for Go dependencies if the project is initialized with `--vendor=true`.
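For example, to scaffold the same project with vendored dependencies:
$ operator-sdk new memcached-operator --repo=github.com/example-inc/memcached-operator --vendor=true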
Read the operator scope documentation on how to run your operator as namespace-scoped vs cluster-scoped.
The main program for the operator, `cmd/manager/main.go`, initializes and runs the Manager. The Manager will automatically register the scheme for all custom resources defined under `pkg/apis/...` and run all controllers under `pkg/controller/...`.
The Manager can restrict the namespace that all controllers will watch for resources:
mgr, err := manager.New(cfg, manager.Options{Namespace: namespace})
By default this will be the namespace that the operator is running in. To watch all namespaces, leave the namespace option empty:
mgr, err := manager.New(cfg, manager.Options{Namespace: ""})
By default the main program will set the manager's namespace using the value of the `WATCH_NAMESPACE` environment variable defined in `deploy/operator.yaml`.
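As a rough sketch, the main program can derive this value along the following lines (simplified; the scaffolded main.go uses an SDK helper to read and validate the variable):
// Read the namespace to watch from the WATCH_NAMESPACE environment
// variable set in deploy/operator.yaml. An empty value means the
// operator watches all namespaces.
namespace := os.Getenv("WATCH_NAMESPACE")
mgr, err := manager.New(cfg, manager.Options{Namespace: namespace})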
Add a new Custom Resource Definition (CRD) API called Memcached, with APIVersion `cache.example.com/v1alpha1` and Kind `Memcached`.
$ operator-sdk add api --api-version=cache.example.com/v1alpha1 --kind=Memcached
This will scaffold the Memcached resource API under `pkg/apis/cache/v1alpha1/...`.
Modify the spec and status of the `Memcached` Custom Resource (CR) at `pkg/apis/cache/v1alpha1/memcached_types.go`:
type MemcachedSpec struct {
	// Size is the size of the memcached deployment
	Size int32 `json:"size"`
}

type MemcachedStatus struct {
	// Nodes are the names of the memcached pods
	// +listType=set
	Nodes []string `json:"nodes"`
}
NOTE: Comment directives, such as +listType=set, are necessary in certain situations to avoid API rule violations when generating OpenAPI files. See https://godoc.org/k8s.io/kube-openapi/pkg/idl to learn more.
After modifying the `*_types.go` file, always run the following command to update the generated code for that resource type:
$ operator-sdk generate k8s
OpenAPIv3 schemas are added to CRD manifests in the `spec.validation` block when the manifests are generated. This validation block allows Kubernetes to validate the properties in a Memcached Custom Resource when it is created or updated. Additionally, a `pkg/apis/<group>/<version>/zz_generated.openapi.go` file is generated containing the Go representation of this validation block if the `+k8s:openapi-gen=true` annotation is present above the kind type declaration (present by default). This auto-generated code is your Go kind type's OpenAPI model, from which you can create a full OpenAPI spec and generate a client. Check out this issue comment for steps on how to do so.
Markers (annotations) are available to configure validations for your API. These markers will always have a `+kubebuilder:validation` prefix. For example, an enum type specification can be added with the following marker:
// +kubebuilder:validation:Enum=Lion;Wolf;Dragon
type Alias string
Usage of markers in API code is discussed in the kubebuilder CRD generation and marker documentation. A full list of OpenAPIv3 validation markers can be found here.
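As an illustrative sketch, the `Size` field from `MemcachedSpec` above could be bounded with validation markers (the bounds here are hypothetical, not part of the scaffold):
type MemcachedSpec struct {
	// Size is the size of the memcached deployment.
	// The bounds below are illustrative, not part of the scaffold.
	// +kubebuilder:validation:Minimum=1
	// +kubebuilder:validation:Maximum=5
	Size int32 `json:"size"`
}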
To update the OpenAPI validation section in the CRD `deploy/crds/cache.example.com_memcacheds_crd.yaml`, run the following command:
$ operator-sdk generate openapi
Note: You may see errors like "API rule violation" when running the above command. For information on these errors, see the API rules documentation.
An example of the generated YAML is as follows:
spec:
validation:
openAPIV3Schema:
properties:
spec:
properties:
size:
format: int32
type: integer
To learn more about OpenAPI v3.0 validation schemas in Custom Resource Definitions, refer to the Kubernetes Documentation.
Add a new Controller to the project that will watch and reconcile the Memcached resource:
$ operator-sdk add controller --api-version=cache.example.com/v1alpha1 --kind=Memcached
This will scaffold a new Controller implementation under `pkg/controller/memcached/...`.
For this example, replace the generated Controller file `pkg/controller/memcached/memcached_controller.go` with the example `memcached_controller.go` implementation.
The example Controller executes the following reconciliation logic for each `Memcached` CR (a condensed sketch of the size step follows this list):
- Create a memcached Deployment if it doesn't exist
- Ensure that the Deployment size is the same as specified by the `Memcached` CR spec
- Update the `Memcached` CR status using the status writer with the names of the memcached pods
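For a flavor of that logic, here is a condensed sketch of the size step, simplified from the example controller (error handling abbreviated; assumes the usual `appsv1`, `types`, and `context` imports):
// Ensure the Deployment size matches the Memcached spec.
found := &appsv1.Deployment{}
err = r.client.Get(context.TODO(), types.NamespacedName{Name: memcached.Name, Namespace: memcached.Namespace}, found)
if err == nil {
	size := memcached.Spec.Size
	if *found.Spec.Replicas != size {
		found.Spec.Replicas = &size
		if err := r.client.Update(context.TODO(), found); err != nil {
			return reconcile.Result{}, err
		}
		// Requeue so the status can be refreshed with the new pod names.
		return reconcile.Result{Requeue: true}, nil
	}
}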
The next two subsections explain how the Controller watches resources and how the reconcile loop is triggered. Skip to the Build section to see how to build and run the operator.
Inspect the Controller implementation at `pkg/controller/memcached/memcached_controller.go` to see how the Controller watches resources.
The first watch is for the Memcached type as the primary resource. For each Add/Update/Delete event, the reconcile loop will be sent a reconcile `Request` (a namespace/name key) for that Memcached object:
err := c.Watch(
	&source.Kind{Type: &cachev1alpha1.Memcached{}}, &handler.EnqueueRequestForObject{})
The next watch is for Deployments, but the event handler will map each event to a reconcile `Request` for the owner of the Deployment, which in this case is the Memcached object for which the Deployment was created. This allows the controller to watch Deployments as a secondary resource.
err := c.Watch(&source.Kind{Type: &appsv1.Deployment{}}, &handler.EnqueueRequestForOwner{
	IsController: true,
	OwnerType:    &cachev1alpha1.Memcached{},
})
There are a number of useful configurations that can be made when initializing a controller and declaring the watch parameters. For more details on these configurations consult the upstream controller godocs.
- Set the max number of concurrent Reconciles for the controller via the `MaxConcurrentReconciles` option. Defaults to 1:
  _, err := controller.New("memcached-controller", mgr, controller.Options{MaxConcurrentReconciles: 2, ...})
- Filter watch events using predicates (see the sketch after this list).
- Choose the type of EventHandler to change how a watch event will translate to reconcile requests for the reconcile loop. For operator relationships that are more complex than primary and secondary resources, the `EnqueueRequestsFromMapFunc` handler can be used to transform a watch event into an arbitrary set of reconcile requests.
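As a small sketch of the predicate option, the primary-resource watch shown earlier could ignore status-only updates by filtering on generation changes (assuming the `predicate` package in your controller-runtime version provides `GenerationChangedPredicate`):
import "sigs.k8s.io/controller-runtime/pkg/predicate"

// Only enqueue requests when the Memcached spec changes; status-only
// updates do not bump metadata.generation and are filtered out.
err := c.Watch(
	&source.Kind{Type: &cachev1alpha1.Memcached{}},
	&handler.EnqueueRequestForObject{},
	predicate.GenerationChangedPredicate{})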
Every Controller has a Reconciler object with a `Reconcile()` method that implements the reconcile loop. The reconcile loop is passed the `Request` argument, which is a Namespace/Name key used to look up the primary resource object, Memcached, from the cache:
func (r *ReconcileMemcached) Reconcile(request reconcile.Request) (reconcile.Result, error) {
	// Lookup the Memcached instance for this reconcile request
	memcached := &cachev1alpha1.Memcached{}
	err := r.client.Get(context.TODO(), request.NamespacedName, memcached)
	...
}
Based on the return values, `Result` and error, the `Request` may be requeued and the reconcile loop may be triggered again:
// Reconcile successful - don't requeue
return reconcile.Result{}, nil
// Reconcile failed due to error - requeue
return reconcile.Result{}, err
// Requeue for any reason other than error
return reconcile.Result{Requeue: true}, nil
You can also set `Result.RequeueAfter` to requeue the `Request` after a grace period:
import "time"
// Reconcile for any reason other than an error after 5 seconds
return reconcile.Result{RequeueAfter: time.Second*5}, nil
Note: Returning `Result` with `RequeueAfter` set is how you can periodically reconcile a CR.
For a guide on Reconcilers, Clients, and interacting with resource Events, see the Client API doc.
Before running the operator, the CRD must be registered with the Kubernetes apiserver:
$ kubectl create -f deploy/crds/cache.example.com_memcacheds_crd.yaml
Once this is done, there are two ways to run the operator:
- As a Deployment inside a Kubernetes cluster
- As a Go program outside a cluster
Note: `operator-sdk build` invokes `docker build` by default, and optionally `buildah bud`. If using `buildah`, skip to the `operator-sdk build` invocation instructions below. If using `docker`, make sure your docker daemon is running and that you can run the docker client without sudo. You can check if this is the case by running `docker version`, which should complete without errors. Follow the instructions for your OS/distribution on how to start the docker daemon and configure your access permissions, if needed.
Note: If a `vendor/` directory is present, run
$ go mod vendor
before building the memcached-operator image.
Build the memcached-operator image and push it to a registry:
$ operator-sdk build quay.io/example/memcached-operator:v0.0.1
$ sed -i 's|REPLACE_IMAGE|quay.io/example/memcached-operator:v0.0.1|g' deploy/operator.yaml
$ docker push quay.io/example/memcached-operator:v0.0.1
Note: If you are performing these steps on OSX, use the following `sed` command instead:
$ sed -i "" 's|REPLACE_IMAGE|quay.io/example/memcached-operator:v0.0.1|g' deploy/operator.yaml
The Deployment manifest is generated at `deploy/operator.yaml`. Be sure to update the deployment image as shown above, since the default is just a placeholder.
Setup RBAC and deploy the memcached-operator:
$ kubectl create -f deploy/service_account.yaml
$ kubectl create -f deploy/role.yaml
$ kubectl create -f deploy/role_binding.yaml
$ kubectl create -f deploy/operator.yaml
Verify that the memcached-operator is up and running:
$ kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
memcached-operator 1 1 1 1 1m
This method is preferred during the development cycle to deploy and test faster.
Set the name of the operator in an environment variable:
export OPERATOR_NAME=memcached-operator
Run the operator locally with the default Kubernetes config file present at `$HOME/.kube/config`:
$ operator-sdk up local --namespace=default
2018/09/30 23:10:11 Go Version: go1.10.2
2018/09/30 23:10:11 Go OS/Arch: darwin/amd64
2018/09/30 23:10:11 operator-sdk Version: 0.0.6+git
2018/09/30 23:10:12 Registering Components.
2018/09/30 23:10:12 Starting the Cmd.
You can use a specific kubeconfig via the flag `--kubeconfig=<path/to/kubeconfig>`.
Create the example `Memcached` CR that was generated at `deploy/crds/cache.example.com_v1alpha1_memcached_cr.yaml`:
$ cat deploy/crds/cache.example.com_v1alpha1_memcached_cr.yaml
apiVersion: "cache.example.com/v1alpha1"
kind: "Memcached"
metadata:
name: "example-memcached"
spec:
size: 3
$ kubectl apply -f deploy/crds/cache.example.com_v1alpha1_memcached_cr.yaml
Ensure that the memcached-operator creates the deployment for the CR:
$ kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
memcached-operator 1 1 1 1 2m
example-memcached 3 3 3 3 1m
Check the pods and CR status to confirm the status is updated with the memcached pod names:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
example-memcached-6fd7c98d8-7dqdr 1/1 Running 0 1m
example-memcached-6fd7c98d8-g5k7v 1/1 Running 0 1m
example-memcached-6fd7c98d8-m7vn7 1/1 Running 0 1m
memcached-operator-7cc7cfdf86-vvjqk 1/1 Running 0 2m
$ kubectl get memcached/example-memcached -o yaml
apiVersion: cache.example.com/v1alpha1
kind: Memcached
metadata:
  clusterName: ""
  creationTimestamp: 2018-03-31T22:51:08Z
  generation: 0
  name: example-memcached
  namespace: default
  resourceVersion: "245453"
  selfLink: /apis/cache.example.com/v1alpha1/namespaces/default/memcacheds/example-memcached
  uid: 0026cc97-3536-11e8-bd83-0800274106a1
spec:
  size: 3
status:
  nodes:
  - example-memcached-6fd7c98d8-7dqdr
  - example-memcached-6fd7c98d8-g5k7v
  - example-memcached-6fd7c98d8-m7vn7
Change the `spec.size` field in the memcached CR from 3 to 4 and apply the change:
$ cat deploy/crds/cache.example.com_v1alpha1_memcached_cr.yaml
apiVersion: "cache.example.com/v1alpha1"
kind: "Memcached"
metadata:
name: "example-memcached"
spec:
size: 4
$ kubectl apply -f deploy/crds/cache.example.com_v1alpha1_memcached_cr.yaml
Confirm that the operator changes the deployment size:
$ kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
example-memcached 4 4 4 4 5m
Clean up the resources:
$ kubectl delete -f deploy/crds/cache.example.com_v1alpha1_memcached_cr.yaml
$ kubectl delete -f deploy/operator.yaml
$ kubectl delete -f deploy/role_binding.yaml
$ kubectl delete -f deploy/role.yaml
$ kubectl delete -f deploy/service_account.yaml
The operator's Manager supports the core Kubernetes resource types, as found in the client-go scheme package, and will also register the schemes of all custom resource types defined in your project under `pkg/apis`.
import (
	"github.com/example-inc/memcached-operator/pkg/apis"
	...
)

// Setup Scheme for all resources
if err := apis.AddToScheme(mgr.GetScheme()); err != nil {
	log.Error(err, "")
	os.Exit(1)
}
To add a 3rd party resource to an operator, you must add it to the Manager's scheme. By creating an `AddToScheme()` method or reusing one, you can easily add a resource to your scheme. The usual pattern is to define a registration function and use the runtime package to create a `SchemeBuilder` for it, as sketched below.
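For illustration, a minimal sketch of such a registration function for a hypothetical `Widget` type in group `example.com/v1` (all names here are placeholders, not a real API):
import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// SchemeGroupVersion identifies the hypothetical 3rd party API group.
var SchemeGroupVersion = schema.GroupVersion{Group: "example.com", Version: "v1"}

// SchemeBuilder collects the functions that register types with a scheme;
// AddToScheme applies all of them to a given scheme.
var (
	SchemeBuilder = runtime.NewSchemeBuilder(addKnownTypes)
	AddToScheme   = SchemeBuilder.AddToScheme
)

// addKnownTypes registers the Widget types with the scheme.
func addKnownTypes(scheme *runtime.Scheme) error {
	scheme.AddKnownTypes(SchemeGroupVersion, &Widget{}, &WidgetList{})
	metav1.AddToGroupVersion(scheme, SchemeGroupVersion)
	return nil
}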
Call the `AddToScheme()` function for your 3rd party resource and pass it the Manager's scheme via `mgr.GetScheme()`.
Example:
import (
	...
	routev1 "github.com/openshift/api/route/v1"
)

func main() {
	...

	// Adding the routev1
	if err := routev1.AddToScheme(mgr.GetScheme()); err != nil {
		log.Error(err, "")
		os.Exit(1)
	}

	...

	// Setup all Controllers
	if err := controller.AddToManager(mgr); err != nil {
		log.Error(err, "")
		os.Exit(1)
	}
}
NOTES:
- After adding new import paths to your operator project, run `go mod vendor` if a `vendor/` directory is present in the root of your project directory to fulfill these dependencies.
- Your 3rd party resource needs to be added before adding the controller in "Setup all Controllers".
To implement complex deletion logic, you can add a finalizer to your Custom Resource. This will prevent your Custom Resource from being deleted until you remove the finalizer (i.e., after your cleanup logic has successfully run). For more information, see the official Kubernetes documentation on finalizers.
Example:
The following is a snippet from the controller file under `pkg/controller/memcached/memcached_controller.go`:

const memcachedFinalizer = "finalizer.cache.example.com"

func (r *ReconcileMemcached) Reconcile(request reconcile.Request) (reconcile.Result, error) {
	reqLogger := log.WithValues("Request.Namespace", request.Namespace, "Request.Name", request.Name)
	reqLogger.Info("Reconciling Memcached")

	// Fetch the Memcached instance
	memcached := &cachev1alpha1.Memcached{}
	err := r.client.Get(context.TODO(), request.NamespacedName, memcached)
	if err != nil {
		// If the resource is not found, that means all of
		// the finalizers have been removed, and the memcached
		// resource has been deleted, so there is nothing left
		// to do.
		if apierrors.IsNotFound(err) {
			return reconcile.Result{}, nil
		}
		return reconcile.Result{}, fmt.Errorf("could not fetch memcached instance: %s", err)
	}

	...

	// Check if the Memcached instance is marked to be deleted, which is
	// indicated by the deletion timestamp being set.
	isMemcachedMarkedToBeDeleted := memcached.GetDeletionTimestamp() != nil
	if isMemcachedMarkedToBeDeleted {
		if contains(memcached.GetFinalizers(), memcachedFinalizer) {
			// Run finalization logic for memcachedFinalizer. If the
			// finalization logic fails, don't remove the finalizer so
			// that we can retry during the next reconciliation.
			if err := r.finalizeMemcached(reqLogger, memcached); err != nil {
				return reconcile.Result{}, err
			}

			// Remove memcachedFinalizer. Once all finalizers have been
			// removed, the object will be deleted.
			memcached.SetFinalizers(remove(memcached.GetFinalizers(), memcachedFinalizer))
			err := r.client.Update(context.TODO(), memcached)
			if err != nil {
				return reconcile.Result{}, err
			}
		}
		return reconcile.Result{}, nil
	}

	// Add finalizer for this CR
	if !contains(memcached.GetFinalizers(), memcachedFinalizer) {
		if err := r.addFinalizer(reqLogger, memcached); err != nil {
			return reconcile.Result{}, err
		}
	}

	...

	return reconcile.Result{}, nil
}
func (r *ReconcileMemcached) finalizeMemcached(reqLogger logr.Logger, m *cachev1alpha1.Memcached) error {
	// TODO(user): Add the cleanup steps that the operator
	// needs to do before the CR can be deleted. Examples
	// of finalizers include performing backups and deleting
	// resources that are not owned by this CR, like a PVC.
	reqLogger.Info("Successfully finalized memcached")
	return nil
}

func (r *ReconcileMemcached) addFinalizer(reqLogger logr.Logger, m *cachev1alpha1.Memcached) error {
	reqLogger.Info("Adding Finalizer for the Memcached")
	m.SetFinalizers(append(m.GetFinalizers(), memcachedFinalizer))

	// Update CR
	err := r.client.Update(context.TODO(), m)
	if err != nil {
		reqLogger.Error(err, "Failed to update Memcached with finalizer")
		return err
	}
	return nil
}

func contains(list []string, s string) bool {
	for _, v := range list {
		if v == s {
			return true
		}
	}
	return false
}
// remove returns a copy of list with all occurrences of s removed.
// Building a new slice avoids mutating list while iterating over it.
func remove(list []string, s string) []string {
	var result []string
	for _, v := range list {
		if v != s {
			result = append(result, v)
		}
	}
	return result
}
To learn about how metrics work in the Operator SDK read the metrics section of the user documentation.
During the lifecycle of an operator it's possible that there may be more than one instance running at any given time, e.g. when rolling out an upgrade for the operator. In such a scenario it is necessary to avoid contention between multiple operator instances via leader election, so that only one leader instance handles the reconciliation while the other instances are inactive but ready to take over when the leader steps down.
There are two different leader election implementations to choose from, each with its own tradeoff.
- Leader-for-life: The leader pod only gives up leadership (via garbage collection) when it is deleted. This implementation precludes the possibility of two instances mistakenly running as leaders (split brain). However, this method can be subject to a delay in electing a new leader. For instance, when the leader pod is on an unresponsive or partitioned node, the `pod-eviction-timeout` dictates how long it takes for the leader pod to be deleted from the node and step down (default 5m).
- Leader-with-lease: The leader pod periodically renews the leader lease and gives up leadership when it can't renew the lease. This implementation allows for a faster transition to a new leader when the existing leader is isolated, but there is a possibility of split brain in certain situations.
By default the SDK enables the leader-for-life implementation. However, you should consult the docs above for both approaches to consider the tradeoffs that make sense for your use case.
The following examples illustrate how to use the two options:
A call to `leader.Become()` will block the operator as it retries until it can become the leader by creating the configmap named `memcached-operator-lock`.
import (
	...
	"github.com/operator-framework/operator-sdk/pkg/leader"
)

func main() {
	...
	err = leader.Become(context.TODO(), "memcached-operator-lock")
	if err != nil {
		log.Error(err, "Failed to retry for leader lock")
		os.Exit(1)
	}
	...
}
If the operator is not running inside a cluster, `leader.Become()` will simply return without error and skip the leader election, since it can't detect the operator's namespace.
The leader-with-lease approach can be enabled via the Manager Options for leader election.
import (
	...
	"sigs.k8s.io/controller-runtime/pkg/manager"
)

func main() {
	...
	opts := manager.Options{
		...
		LeaderElection:   true,
		LeaderElectionID: "memcached-operator-lock",
	}
	mgr, err := manager.New(cfg, opts)
	...
}
When the operator is not running in a cluster, the Manager will return an error on starting, since it can't detect the operator's namespace in order to create the configmap for leader election. You can override this namespace by setting the Manager's `LeaderElectionNamespace` option.
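A minimal sketch of that override (the namespace value here is illustrative):
opts := manager.Options{
	LeaderElection:   true,
	LeaderElectionID: "memcached-operator-lock",
	// Pin the namespace used for the leader election configmap, e.g.
	// when running the operator outside the cluster.
	LeaderElectionNamespace: "default",
}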