update cilium speedup document
cyclinder committed Nov 21, 2024
1 parent 8b361d1 commit fd1f4aa
Showing 8 changed files with 569 additions and 550 deletions.
296 changes: 45 additions & 251 deletions docs/en/docs/network/modules/cilium/cilium-speedup.md
---
MTPE: WANG0608GitHub
Date: 2024-08-13
---

# Cilium Network Communication Acceleration

## Introduction

This page describes how to configure Cilium's network communication acceleration capability. There are two optional configuration methods: setting the acceleration parameters when creating a cluster, or modifying the Cilium ConfigMap of an existing cluster.

## Prerequisites

Please make sure the Linux kernel version is >= 4.9.17, with 5.10+ recommended. To check and upgrade the kernel, you can do the following:

1. To view the current kernel version:

    ```bash
    uname -r
    ```

2. To upgrade the kernel, install a newer kernel package for your distribution, then regenerate the GRUB configuration:

    ```bash
    grub2-mkconfig -o /boot/grub2/grub.cfg
    ```
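
If you want to script the prerequisite check, the version comparison can be done with `sort -V`. This is a minimal sketch, not part of the original steps; it assumes a GNU userland and strips any distro suffix from the kernel string before comparing:

```shell
#!/bin/sh
# Compare the running kernel against Cilium's minimum requirement (4.9.17).
# Strip any distro suffix (e.g. "5.10.0-21-amd64" -> "5.10.0") before comparing.
required="4.9.17"
current="$(uname -r | cut -d- -f1)"
# sort -V orders version strings numerically; if the required version sorts
# first (or ties), the running kernel is at least the required version.
if [ "$(printf '%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
    echo "kernel $current meets the minimum ($required)"
else
    echo "kernel $current is older than $required; upgrade before enabling these features"
fi
```

Remember that 5.10+ is still recommended, since several of the options below (such as eBPF masquerading) require it even when the minimum is met.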

> Note that manually updating the kernel may be risky and should be done with caution in production environments.

## First Method: Configure When Creating the Cluster

1. Click `Container Management` --> `Clusters`. On the `Create Cluster` page, fill in the cluster's basic information and node configuration, then go to `Network Configuration` and configure the following:

![cilium-speedup01](../../images/cilium_speedup001.png)

- Select `cilium` as the cluster's CNI plugin.

- Add `other parameters` as follows:

```yaml
# auto-direct-node-routes must be set to true; otherwise cross-node traffic cannot be routed
cilium_auto_direct_node_routes: "true"
# Replace the iptables-based masquerading with the eBPF implementation.
# Requires kernel 5.10 or later; on older kernels it falls back to the
# iptables implementation even when enabled.
cilium_enable_bpf_masquerade: "true"
# Source address translation (masquerading) for Pod traffic leaving the cluster:
# enable it when using tunnel mode, disable it when BGP connects Pods to the physical network.
cilium_enable_ipv6_masquerade: "false"
# Set to "false" to let the host bypass its kernel network stack when
# processing packets, which speeds up data forwarding (BPF host routing).
# Cilium falls back to legacy routing if the host kernel does not support it.
cilium_enable_host_legacy_routing: "false"
# Enable the bandwidth manager to improve TCP and UDP performance
cilium_enable_bandwidth_manager: "true"
# Kube-proxy replacement; requires that the kube-proxy component is removed
cilium_kube_proxy_replacement: strict
# Disable tunnel mode (use native routing)
cilium_tunnel_mode: disabled
# (optional) BBR congestion control; requires kernel 5.18 or later
cilium_enable_bbr: "true"
```

- Use the default configuration for everything else.

1. Click `Create Cluster` to complete the creation.
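
To confirm the parameters took effect, you can inspect the generated `cilium-config` ConfigMap after the cluster is up. This is a hedged sketch: the key names grepped for below follow Cilium's ConfigMap conventions (they differ from the installer parameter names above) and may vary by Cilium version; `DRY_RUN=1` only prints the command so the script can run without cluster access:

```shell
#!/bin/sh
# Inspect the cilium-config ConfigMap for acceleration-related keys.
# DRY_RUN=1 (the default here) only prints the command; set DRY_RUN=0
# on a machine with cluster access to actually run it.
DRY_RUN="${DRY_RUN:-1}"
cmd="kubectl -n kube-system get configmap cilium-config -o yaml"
if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $cmd"
else
    $cmd | grep -E 'auto-direct-node-routes|enable-bpf-masquerade|enable-bandwidth-manager'
fi
```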

## Second Method: Modify the Cilium ConfigMap

If the cluster has already been created and you need to enable the acceleration parameters, you can modify the `cilium-config` ConfigMap.

Click `Container Management` --> `Clusters`. Go to the created cluster and click `ConfigMaps & Secrets` --> `ConfigMaps`, find `cilium-config`, and then click Edit to enter the following acceleration parameters:

![cilium-speedup02](https://docs.daocloud.io/daocloud-docs-images/docs/en/docs/network/images/cilium-speedup2.png)

![speed-up03](https://docs.daocloud.io/daocloud-docs-images/docs/en/docs/network/images/cilium-speedup3.png)
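
For orientation, an edited `cilium-config` might contain entries like the following. This is a hypothetical excerpt only: the exact keys present, and their names, vary across Cilium versions.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cilium-config
  namespace: kube-system
data:
  auto-direct-node-routes: "true"
  enable-bpf-masquerade: "true"
  enable-host-legacy-routing: "false"
  enable-bandwidth-manager: "true"
  kube-proxy-replacement: "strict"
```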

Once the parameters are written, check whether the following environment variables exist in the cilium-agent DaemonSet (in the `cilium-agent` containers); if they do not, add them manually:

```yaml
- name: KUBERNETES_SERVICE_HOST
  value: <YOUR_K8S_APISERVER_IP>
- name: KUBERNETES_SERVICE_PORT
  value: "6443"
```

After adding the environment variables, restart the cilium-agent Pods for the configuration to take effect.
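
The restart can be done with a rollout, assuming the agent DaemonSet uses Cilium's default name `cilium` in `kube-system` (verify the name in your cluster first). As above, `DRY_RUN=1` only prints the commands so the sketch runs without a cluster:

```shell
#!/bin/sh
# Restart the cilium-agent DaemonSet so the edited ConfigMap and the new
# environment variables are picked up, then wait for the rollout to finish.
DRY_RUN="${DRY_RUN:-1}"
run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}
run kubectl -n kube-system rollout restart daemonset/cilium
run kubectl -n kube-system rollout status daemonset/cilium --timeout=300s
```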