Extending Kubernetes: Create Controllers for Core and Custom Resources

Original article: https://medium.com/@trstringer/create-kubernetes-controllers-for-core-and-custom-resources-62fc35ad64a3
Thanks to the original author for this write-up. Since the original is difficult to access, it is reproduced here for easier study.


What is this?

This post can be broken down into sub-topics:

  • Controllers overview
  • Controller event flow
  • Controller with core resources
  • Controller with custom resources
  • Defining custom resources
  • Generating custom resource code
  • Wiring up the generated code to the controller
  • Creating Custom Resource Definitions
  • Running the controller

What is this NOT?

This post is not a discussion about when you should use custom resources (and controllers). It assumes you are looking for the knowledge of how to create them, whether out of curiosity or because of a requirement.

For a really good summary of when you should or shouldn’t create custom resources and controllers, please refer to the official Kubernetes documentation on the topic.

Controllers overview

Kubernetes has a very “pluggable” way to add your own logic in the form of a controller. A controller is a component that you can develop and run in the context of a Kubernetes cluster.

Controllers are an essential part of Kubernetes. They are the “brains” behind the resources themselves. For instance, a Deployment resource in Kubernetes is tasked with making sure there is a certain number of pods running. This logic can be found in the deployment controller (GitHub).

You can have a custom controller without a custom resource (e.g. custom logic on native resource types). Conversely, you can have custom resources without a controller, but that is a glorified data store with no custom logic behind it.

Controller event flow

Working backwards (as far as event flow goes), the controller “subscribes” to a queue. The controller worker is going to block on a call to get the next item from the queue.

An event is the combination of an action (create, update, or delete) and a resource key (typically in the format of namespace/name).
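As a rough sketch of that blocking loop (the Controller, queue, and handle names here are illustrative, assuming client-go's workqueue package rather than quoting the post's exact code):

// runWorker pulls keys off the queue until the queue shuts down;
// c.queue is assumed to be a workqueue.RateLimitingInterface
func (c *Controller) runWorker() {
    for {
        // Get blocks until an item is available (or until the queue
        // is shutting down, signaled by the second return value)
        key, quit := c.queue.Get()
        if quit {
            return
        }

        // hand the "namespace/name" key to the business logic
        if err := c.handle(key.(string)); err == nil {
            // success: stop tracking retries for this key
            c.queue.Forget(key)
        }

        // always mark the item done so it can be queued again later
        c.queue.Done(key)
    }
}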

Before we talk about how the queue is populated for the controller, it is worth mentioning the idea of an informer. The informer is the “link” to the part of Kubernetes that is tasked with handing out these events, as well as retrieving the resources in the cluster to focus on. Put another way, the informer is the proxy between Kubernetes and your controller (and the queue is the store for it).

Part of the informer’s responsibility is to register event handlers for the three different types of events: add, update, and delete. It is in the informer’s event handler functions that we add the resource key to the queue, handing the work off to the controller’s handlers.
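In client-go terms that wiring looks roughly like the sketch below (it assumes an informer like the one built in the next section, plus the workqueue and cache packages from client-go; the key functions convert a resource object into its namespace/name key):

// create the work queue that the controller will consume from
queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())

informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
    AddFunc: func(obj interface{}) {
        // convert the new object into its "namespace/name" key
        if key, err := cache.MetaNamespaceKeyFunc(obj); err == nil {
            queue.Add(key)
        }
    },
    UpdateFunc: func(oldObj, newObj interface{}) {
        if key, err := cache.MetaNamespaceKeyFunc(newObj); err == nil {
            queue.Add(key)
        }
    },
    DeleteFunc: func(obj interface{}) {
        // the deletion-handling variant also copes with tombstones
        if key, err := cache.DeletionHandlingMetaNamespaceKeyFunc(obj); err == nil {
            queue.Add(key)
        }
    },
})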

See below for an illustration of the event flow…


[Figure: the event flow from the Kubernetes API, through the informer and its event handlers, onto the queue, and finally to the controller worker]

Controller: Core resources

There are two types of resources that controllers can “watch”: core resources and custom resources. Core resources are what Kubernetes ships with (for instance: pods).

To work with core resources, you specify a few components when you define your informer…

  • ListWatch — the ListFunc and WatchFunc should reference native APIs to list and watch core resources
  • Controller handlers — the controller should take into account the type of resource it expects to work with

In the case of the example, this informer (GitHub) is defined to list and watch pods…

// get the Kubernetes client for connectivity
client := getKubernetesClient()

// create the informer so that we can not only list resources
// but also watch them for all pods in the default namespace
informer := cache.NewSharedIndexInformer(
    // the ListWatch contains the two functions that our informer
    // requires: ListFunc to list the resources we want to handle,
    // and WatchFunc to watch them for changes
    &cache.ListWatch{
        ListFunc: func(options meta_v1.ListOptions) (runtime.Object, error) {
            // list all of the pods (core resource) in the default namespace
            return client.CoreV1().Pods(meta_v1.NamespaceDefault).List(options)
        },
        WatchFunc: func(options meta_v1.ListOptions) (watch.Interface, error) {
            // watch all of the pods (core resource) in the default namespace
            return client.CoreV1().Pods(meta_v1.NamespaceDefault).Watch(options)
        },
    },
    &api_v1.Pod{}, // the target type (Pod)
    0,             // no resync (period of 0)
    cache.Indexers{},
)

This could just as easily have been written to work with deployments, daemon sets, or any other core resource that ships with Kubernetes.
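For instance (this variant is not part of the example repo), pointing the same informer at deployments only changes the ListWatch functions and the target type. This sketch assumes an apps/v1 API and an apps_v1 import alias for k8s.io/api/apps/v1:

informer := cache.NewSharedIndexInformer(
    &cache.ListWatch{
        ListFunc: func(options meta_v1.ListOptions) (runtime.Object, error) {
            // list all of the deployments in the default namespace
            return client.AppsV1().Deployments(meta_v1.NamespaceDefault).List(options)
        },
        WatchFunc: func(options meta_v1.ListOptions) (watch.Interface, error) {
            // watch all of the deployments in the default namespace
            return client.AppsV1().Deployments(meta_v1.NamespaceDefault).Watch(options)
        },
    },
    &apps_v1.Deployment{}, // the target type (Deployment)
    0,
    cache.Indexers{},
)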

For a more detailed look into how a controller for core resources works, please refer to the GitHub repo with the full example. A few things to note: I specifically wrote this code to be read as easily as possible, so everything lives in a single package and the code is very verbosely commented. Hopefully it reads like a book! The significant source code files are…

  • main.go — this is the entry point for the controller as well as where everything is wired up. Start here
  • controller.go — the Controller struct and methods, and where all of the work is done as far as the controller loop is concerned
  • handler.go — the sample handler that the controller uses to take action on triggered events

Controller: Custom resources

Handling core resource events is interesting, and a great way to understand the basic mechanisms of controllers, informers, and queues. But the use cases are limited. The real power and flexibility of controllers shows when you start working with custom resources.

You can think of custom resources as the data, and controllers as the logic behind the data. Working together, they are a significant component to extending Kubernetes.

The base components of our controller will remain mostly the same as when working with core resources: We will still have an informer, a queue, and the controller itself. But now we need to define the actual custom resource and inject that into the informer.

Define custom resource

When developing a custom resource (and controller) you will undoubtedly already have a requirement in mind. The first step in defining the custom resource is to figure out the following…

  • The API group name — in my case I’ll use trstringer.com, but this can be whatever you want
  • The version — I’ll use “v1” for this custom resource, but you are welcome to use any that you like. For some ideas of existing API versions in your Kubernetes cluster you can run kubectl api-versions. Some common ones are “v1”, “v1beta2”, and “v2alpha1”
  • The resource name — how your resource will be individually identified. For my example I’ll use the resource name MyResource

Before we create the resource and necessary items, let’s first create the directory structure: $ mkdir -p pkg/apis/myresource/v1.
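The custom resource types themselves live in types.go, a file this copy of the post omits. Below is a minimal sketch of it, consistent with the registration code and the example object used later (spec fields message and someValue): $ touch pkg/apis/myresource/v1/types.go…

package v1

import meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// +genclient
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// MyResource is the definition of our custom resource, embedding
// the standard Kubernetes type and object metadata
type MyResource struct {
    meta_v1.TypeMeta   `json:",inline"`
    meta_v1.ObjectMeta `json:"metadata,omitempty"`

    Spec MyResourceSpec `json:"spec"`
}

// MyResourceSpec holds the custom data carried by the resource
type MyResourceSpec struct {
    Message   string `json:"message"`
    SomeValue *int32 `json:"someValue"`
}

// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object

// MyResourceList is a list of MyResource resources
type MyResourceList struct {
    meta_v1.TypeMeta `json:",inline"`
    meta_v1.ListMeta `json:"metadata"`

    Items []MyResource `json:"items"`
}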

Create the group name const in a new file:

$ touch pkg/apis/myresource/register.go…
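This copy omits the file body as well; all it needs to hold is the exported group name, matching the +groupName tag used below:

package myresource

// GroupName is the API group for the custom resource
const GroupName = "trstringer.com"

The versioned package also needs a doc.go to carry the package-level code generator tags: $ touch pkg/apis/myresource/v1/doc.go…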
// +k8s:deepcopy-gen=package
// +groupName=trstringer.com

package v1

Like in types.go, we have a couple of comment tags for the code generator. When defined in doc.go, these settings take effect for the whole package: here we say that deepcopy functions should be generated for all types in the package (unless turned off for a particular type), and we tell the generator the API group name with the +groupName tag.

The client requires a particular API surface area for custom types, and the package needs to include AddToScheme and Resource. These functions handle adding our types to the scheme. Create the source file for this functionality in the package: $ touch pkg/apis/myresource/v1/register.go…

package v1

import (
    meta_v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/apimachinery/pkg/runtime/schema"

    "github.com/trstringer/k8s-controller-core-resource/pkg/apis/myresource"
)

// SchemeGroupVersion is the identifier for the API, composed of
// the group name and the version of the API
var SchemeGroupVersion = schema.GroupVersion{
    Group:   myresource.GroupName,
    Version: "v1",
}

// AddToScheme is built from a SchemeBuilder that uses functions
// to add our types to the scheme
var AddToScheme = runtime.NewSchemeBuilder(addKnownTypes).AddToScheme

// Resource takes an unqualified resource name and returns a
// group-qualified GroupResource
func Resource(resource string) schema.GroupResource {
    return SchemeGroupVersion.WithResource(resource).GroupResource()
}

// addKnownTypes adds our types to the API scheme by registering
// MyResource and MyResourceList
func addKnownTypes(scheme *runtime.Scheme) error {
    scheme.AddKnownTypes(
        SchemeGroupVersion,
        &MyResource{},
        &MyResourceList{},
    )

    // also register the common Kubernetes meta types with this group version
    meta_v1.AddToGroupVersion(scheme, SchemeGroupVersion)
    return nil
}

At this point we should have all of the boilerplate to run the code generator to do a lot of the heavy lifting to create the client, informer, and lister code (as well as the deepcopy functionality that is required).

Run the code generator

There is a little bit of setup to run the code generator. I’ve included the shell commands below that you need to run. It’s the k8s.io/code-generator package that contains the generate-groups.sh shell script, which we will use to do all of the heavy lifting (this shell script directly invokes the client-gen, informer-gen, and lister-gen binaries).

# ROOT_PACKAGE :: the package (relative to $GOPATH/src) that is the target for code generation
ROOT_PACKAGE="github.com/trstringer/k8s-controller-core-resource"
# CUSTOM_RESOURCE_NAME :: the name of the custom resource that we're generating client code for
CUSTOM_RESOURCE_NAME="myresource"
# CUSTOM_RESOURCE_VERSION :: the version of the resource
CUSTOM_RESOURCE_VERSION="v1"

# retrieve the code-generator scripts and bins
go get -u k8s.io/code-generator/...
cd $GOPATH/src/k8s.io/code-generator

# run the code-generator entrypoint script
./generate-groups.sh all "$ROOT_PACKAGE/pkg/client" "$ROOT_PACKAGE/pkg/apis" "$CUSTOM_RESOURCE_NAME:$CUSTOM_RESOURCE_VERSION"

# view the newly generated files
tree $GOPATH/src/$ROOT_PACKAGE/pkg/client
# pkg/client/
# ├── clientset
# │   └── versioned
# │       ├── clientset.go
# │       ├── doc.go
# │       ├── fake
# │       │   ├── clientset_generated.go
# │       │   ├── doc.go
# │       │   └── register.go
# │       ├── scheme
# │       │   ├── doc.go
# │       │   └── register.go
# │       └── typed
# │           └── myresource
# │               └── v1
# │                   ├── doc.go
# │                   ├── fake
# │                   │   ├── doc.go
# │                   │   ├── fake_myresource_client.go
# │                   │   └── fake_myresource.go
# │                   ├── generated_expansion.go
# │                   ├── myresource_client.go
# │                   └── myresource.go
# ├── informers
# │   └── externalversions
# │       ├── factory.go
# │       ├── generic.go
# │       ├── internalinterfaces
# │       │   └── factory_interfaces.go
# │       └── myresource
# │           ├── interface.go
# │           └── v1
# │               ├── interface.go
# │               └── myresource.go
# └── listers
#     └── myresource
#         └── v1
#             ├── expansion_generated.go
#             └── myresource.go
# 
# 16 directories, 22 files

After running the code generator, we now have generated code that handles a wide array of functionality for our new resource. Next we need to tie up a few loose ends for the resource.

Wire up the generated code

There are a couple of changes we need to make. First, in our helper function that gets the Kubernetes client, we now also need to return an instance of a configured client that can interact with MyResource resources…

// retrieve the Kubernetes cluster client from outside of the cluster
func getKubernetesClient() (kubernetes.Interface, myresourceclientset.Interface) {
    // construct the path to resolve to `~/.kube/config`
    kubeConfigPath := os.Getenv("HOME") + "/.kube/config"

    // create the config from the path
    config, err := clientcmd.BuildConfigFromFlags("", kubeConfigPath)
    if err != nil {
        log.Fatalf("getClusterConfig: %v", err)
    }

    // generate the client based off of the config
    client, err := kubernetes.NewForConfig(config)
    if err != nil {
        log.Fatalf("getClusterConfig: %v", err)
    }

    myresourceClient, err := myresourceclientset.NewForConfig(config)
    if err != nil {
        log.Fatalf("getClusterConfig: %v", err)
    }

    log.Info("Successfully constructed k8s client")
    return client, myresourceClient
}

We also now need to store the custom resource client, and we can use the generated helper function to return an informer tailored to the custom resource…

func main() {
    // get the Kubernetes client for connectivity
    client, myresourceClient := getKubernetesClient()

    // retrieve our custom resource informer which was generated from
    // the code generator and pass it the custom resource client, specifying
    // we should be looking through all namespaces for listing and watching
    informer := myresourceinformer_v1.NewMyResourceInformer(
        myresourceClient,
        meta_v1.NamespaceAll,
        0,
        cache.Indexers{},
    )

    // ... remainder of main.main unchanged and omitted for brevity
}
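For reference, myresourceclientset and myresourceinformer_v1 are import aliases into the generated packages; a plausible import block, with the paths taken from the generated tree shown earlier:

import (
    myresourceclientset "github.com/trstringer/k8s-controller-core-resource/pkg/client/clientset/versioned"
    myresourceinformer_v1 "github.com/trstringer/k8s-controller-core-resource/pkg/client/informers/externalversions/myresource/v1"
)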

Custom Resource Definition

Now that we’ve created the custom logic part of the custom resource (through the controller), we need to actually create the data part of our custom resource: the Custom Resource Definition.

I put my CRD in a separate directory at the root of the repo: $ mkdir crd. Then I created my definition: $ touch crd/myresource.yaml…

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: myresources.trstringer.com
spec:
  group: trstringer.com
  version: v1
  names:
    kind: MyResource
    plural: myresources
  scope: Namespaced

This should appear straightforward, as we’re using this CRD to define the API group, version, and name of the custom resource.

Create the CRD in your cluster by running $ kubectl apply -f crd/myresource.yaml.
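You can confirm that the API server accepted the definition (output omitted): $ kubectl get crd myresources.trstringer.com.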

The full code for this example can be found on this repo (GitHub).

Running the controller

To run the controller, run $ go run *.go in the root of the repo. Then, in a separate shell, create an object of type MyResource. I did this by creating an example configuration at the root of the repo: $ mkdir example && touch example/example-myresource.yaml…

apiVersion: trstringer.com/v1
kind: MyResource
metadata:
  name: example-myresource
spec:
  message: hello world
  someValue: 13

I then created this in my cluster with $ kubectl apply -f example/example-myresource.yaml, and the output from my controller logging shows that my custom controller did indeed pick up the create event for this resource (and could have handled it however it needed to be handled)…

[Figure: controller log output showing the create event for example-myresource being handled]
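Because the CRD is registered, the new object is also visible through kubectl like any native resource, using the plural name from the definition: $ kubectl get myresources.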

Summary

Kubernetes is an exciting platform, and one of the really great features of it is the ability to extend it. The sky is the limit, and hopefully with this additional knowledge it’ll be easier to understand how controllers work and how to create your own.

Enjoy!
