Rancher
Official Rancher documentation: https://docs.rancher.cn/
What is Rancher
Rancher is a container management platform that helps organizations deploy and manage containers in production quickly and easily. Rancher makes it simple to run Kubernetes across all kinds of environments, meeting IT requirements and supporting DevOps teams.
Kubernetes has not only become the standard for container orchestration; it is also rapidly becoming the standard infrastructure offered by cloud and virtualization vendors. Rancher users can create Kubernetes clusters with Rancher Kubernetes Engine (RKE), or use hosted Kubernetes services such as GKE, AKS, and EKS. Rancher users can also import and manage existing Kubernetes clusters.
Rancher supports centralized authentication systems for managing Kubernetes clusters. For example, employees of a large enterprise can access Kubernetes clusters in GKE with their corporate Active Directory credentials. IT administrators can set access-control and security policies across users, groups, projects, clusters, and clouds, and can monitor the health and capacity of all Kubernetes clusters from a single page.
Rancher gives DevOps engineers an intuitive user interface for managing their service containers; users can get started without deep knowledge of Kubernetes concepts. Rancher includes an application catalog that supports one-click deployment of Helm and Compose templates. Rancher is certified with a range of cloud and on-premises ecosystem products, including security tools, monitoring systems, container registries, and storage and network drivers. The figure below illustrates the role Rancher plays in IT and DevOps organizations: each team deploys applications on the public or private cloud of its choice.
Rancher Architecture
Most of the Rancher 2.0 software runs on the Rancher Server node; the Rancher Server includes all the components used to manage the entire Rancher deployment.
The following diagram illustrates the Rancher 2.0 runtime architecture, showing a Rancher server installation that manages two Kubernetes clusters:
Rancher Components
- Rancher API server
- Cluster controllers and agents
- Authentication proxy
- Rancher UI
Installing Rancher
System Requirements
| OS | Version | Docker Version |
|---|---|---|
| CentOS | 7.5, 7.6, 7.7 | Docker 17.03.2, 18.06.2, 18.09.x, 19.03.x |
| Oracle Linux | 7.6 | Docker 19.03.x |
| RancherOS | 1.5.4 | Docker 17.03.2, 18.06.2, 18.09.x (up to 18.09.8), 19.03.x |
| RHEL | 7.5, 7.6, 7.7 | RHEL Docker 1.13.x; Docker 17.03.2, 18.06.2, 18.09.x, 19.03.x |
| Ubuntu | 16.04, 18.04 | Docker 17.03.2, 18.06.2, 18.09.x, 19.03.x |
| Windows Server | 1809, 1903 | Docker 19.03.x EE |
Kubernetes Versions

| Component | Version | Bundled Components |
|---|---|---|
| kubernetes | v1.17.0+ | etcd v3.4.3, flannel v0.11.0, canal v3.10.2, nginx-ingress-controller 0.25.1 |
| kubernetes | v1.16.3+ | etcd v3.3.15, flannel v0.11.0, canal v3.8.1, nginx-ingress-controller 0.25.1 |
| kubernetes | v1.15.6+ | etcd v3.3.10, flannel v0.11.0, canal v3.7.4, nginx-ingress-controller 0.25.1 |
Single-Node Rancher
```bash
# Start the Rancher server container
docker run -d \
    --name=rancher \
    --restart=unless-stopped \
    -v /opt/rancher/auditlog:/var/log/auditlog \
    -e AUDIT_LEVEL=3 \
    -e AUDIT_LOG_PATH=/var/log/auditlog/rancher-api-audit.log \
    -e AUDIT_LOG_MAXAGE=20 \
    -e AUDIT_LOG_MAXBACKUP=20 \
    -e AUDIT_LOG_MAXSIZE=100 \
    -p 80:80 -p 443:443 \
    rancher/rancher
```
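As a quick sanity check (my addition, not part of the original steps), confirm the container started and follow its logs until the UI answers:

```bash
# List the container and stream its startup logs
docker ps --filter "name=rancher"
docker logs -f rancher
```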
Multi-Node Rancher
A multi-node setup deploys Rancher on top of a Kubernetes cluster.
Since I only have one server, the k8s cluster has a single node that acts as both master and worker. Remove the master taint so that pods can be scheduled on it:

```bash
kubectl taint node localhost node-role.kubernetes.io/master-
```
```bash
[root@localhost rancher]# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
localhost   Ready    master   17h   v1.17.4
```
Deploying Helm v3
```bash
mkdir /opt/helm
cd /opt/helm
wget https://get.helm.sh/helm-v3.1.2-linux-amd64.tar.gz
tar zxvf helm-v3.1.2-linux-amd64.tar.gz
linux-amd64/
linux-amd64/helm
linux-amd64/README.md
linux-amd64/LICENSE
mv linux-amd64/ bin
```
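Optionally — a small convenience the original skips — put the binary on the PATH so you can type helm instead of the full /opt/helm/bin/helm path used below:

```bash
# Assumes a bash shell; adjust the profile file for your environment
echo 'export PATH=$PATH:/opt/helm/bin' >> ~/.bashrc
source ~/.bashrc
```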
```bash
# Verify the binary
[root@localhost ~]# /opt/helm/bin/helm version
version.BuildInfo{Version:"v3.1.2", GitCommit:"d878d4d45863e42fd5cff6743294a11d28a9abce", GitTreeState:"clean", GoVersion:"go1.13.8"}
```
```bash
# Add the rancher chart repository
/opt/helm/bin/helm repo add rancher-stable \
    https://releases.rancher.com/server-charts/stable
# Output:
"rancher-stable" has been added to your repositories
```
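It doesn't hurt to refresh the index and confirm the chart is visible before installing (standard helm v3 commands, not in the original):

```bash
# Refresh the local cache of the repo index, then search it
/opt/helm/bin/helm repo update
/opt/helm/bin/helm search repo rancher
```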
Deploying cert-manager
- Deploy cert-manager to provide the SSL certificate for rancher.
```bash
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.14.0/cert-manager.yaml
```
```bash
[root@localhost rancher]# kubectl get pods -n cert-manager
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-75f6cdcb64-ng7xp              1/1     Running   0          7m47s
cert-manager-cainjector-79788689f9-6gq8m   1/1     Running   0          7m47s
cert-manager-webhook-5b6c798c9-r6bkz       1/1     Running   0          7m47s
```
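If the webhook pod stays unready, a useful first check (my addition, not from the original) is whether the cert-manager CRDs made it into the cluster:

```bash
# The v0.14.0 manifest bundles the CRDs; they should all show up here
kubectl get crds | grep cert-manager
```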
Exporting the rancher YAML Files
Installing rancher
Rendering the YAML files
- Since I rarely use helm, I render the chart into complete YAML files here.
Using cert-manager
```bash
# Create the namespace; cattle-system will later also hold the rancher agents and related components
kubectl create namespace cattle-system
# Install rancher, using cert-manager for the certificate
/opt/helm/bin/helm install rancher rancher-stable/rancher \
    --namespace cattle-system \
    --set hostname=rancher.jicki.cn \
    --debug --dry-run
```
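Since the goal here is just the rendered manifests, helm v3's `helm template` is an equivalent route that never contacts the cluster — a sketch using the same chart and values:

```bash
# Render the chart offline into one file, then split it into per-resource files by hand
/opt/helm/bin/helm template rancher rancher-stable/rancher \
    --namespace cattle-system \
    --set hostname=rancher.jicki.cn > rancher.yaml
```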
- clusterRoleBinding.yaml
```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rancher
  labels:
    app: rancher
    chart: rancher-2.3.5
    heritage: Helm
    release: rancher
subjects:
- kind: ServiceAccount
  name: rancher
  namespace: cattle-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```
```yaml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: rancher
  namespace: cattle-system
  labels:
    app: rancher
    chart: rancher-2.3.5
    heritage: Helm
    release: rancher
```
```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: rancher
  namespace: cattle-system
  labels:
    app: rancher
    chart: rancher-2.3.5
    heritage: Helm
    release: rancher
spec:
  replicas: 3
  selector:
    matchLabels:
      app: rancher
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: rancher
        release: rancher
    spec:
      serviceAccountName: rancher
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - rancher
              topologyKey: kubernetes.io/hostname
      containers:
      - image: rancher/rancher:v2.3.5
        imagePullPolicy: IfNotPresent
        name: rancher
        ports:
        - containerPort: 80
          protocol: TCP
        args:
        # Public trusted CA - clear ca certs
        - "--http-listen-port=80"
        - "--https-listen-port=443"
        - "--add-local=auto"
        env:
        - name: CATTLE_NAMESPACE
          value: cattle-system
        - name: CATTLE_PEER_SERVICE
          value: rancher
        livenessProbe:
          httpGet:
            path: /healthz
            port: 80
          initialDelaySeconds: 60
          periodSeconds: 30
        readinessProbe:
          httpGet:
            path: /healthz
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 30
        resources:
          {}
        volumeMounts:
      volumes:
```
```yaml
apiVersion: v1
kind: Service
metadata:
  name: rancher
  namespace: cattle-system
  labels:
    app: rancher
    chart: rancher-2.3.5
    heritage: Helm
    release: rancher
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: rancher
```
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rancher
  namespace: cattle-system
  labels:
    app: rancher
    chart: rancher-2.3.5
    heritage: Helm
    release: rancher
  annotations:
    cert-manager.io/issuer: rancher
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
spec:
  rules:
  - host: rancher.jicki.cn  # hostname to access rancher server
    http:
      paths:
      - backend:
          serviceName: rancher
          servicePort: 80
  tls:
  - hosts:
    - rancher.jicki.cn
    secretName: tls-rancher-ingress
```
```yaml
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: rancher
  namespace: cattle-system
  labels:
    app: rancher
    chart: rancher-2.3.5
    heritage: Helm
    release: rancher
spec:
  ca:
    secretName: tls-rancher
```
```bash
[root@localhost rancher]# kubectl apply -f .
clusterrolebinding.rbac.authorization.k8s.io/rancher created
deployment.apps/rancher created
ingress.extensions/rancher created
issuer.cert-manager.io/rancher created
service/rancher created
serviceaccount/rancher created
```
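To block until all three replicas are up (optional, not in the original), the standard rollout check works:

```bash
# Waits until the Deployment reports all replicas available
kubectl -n cattle-system rollout status deploy/rancher
```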
```bash
[root@localhost rancher]# kubectl get pods,svc,ingress,deployment -n cattle-system
NAME                           READY   STATUS    RESTARTS   AGE
pod/rancher-7689c58786-7gvsk   1/1     Running   1          44m
pod/rancher-7689c58786-fllcn   1/1     Running   2          44m
pod/rancher-7689c58786-vgk75   1/1     Running   2          44m

NAME              TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/rancher   ClusterIP   10.254.54.9   <none>        80/TCP    44m

NAME                         HOSTS              ADDRESS   PORTS   AGE
ingress.extensions/rancher   rancher.jicki.cn             80      44m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/rancher   3/3     3            3           44m
```
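Because the Ingress carries the cert-manager.io/issuer annotation, cert-manager should also have created a Certificate for it; if the UI shows TLS errors, these are worth inspecting (my addition, not from the original):

```bash
# Check the Certificate created from the ingress annotation and the Issuer backing it
kubectl -n cattle-system get certificate
kubectl -n cattle-system describe issuer rancher
```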
Using a Self-Signed Certificate
```bash
# helm v3
# Create the namespace; cattle-system will later also hold the rancher agents and related components
kubectl create namespace cattle-system
```
```bash
# Install rancher with a self-signed certificate
/opt/helm/bin/helm install rancher rancher-stable/rancher \
    --namespace cattle-system \
    --set hostname=rancher.jicki.cn \
    --set ingress.tls.source=secret \
    --debug --dry-run
```
- clusterRoleBinding.yaml
```yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rancher
  labels:
    app: rancher
    chart: rancher-2.3.5
    heritage: Helm
    release: rancher
subjects:
- kind: ServiceAccount
  name: rancher
  namespace: cattle-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```
```yaml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: rancher
  labels:
    app: rancher
    chart: rancher-2.3.5
    heritage: Helm
    release: rancher
```
```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: rancher
  labels:
    app: rancher
    chart: rancher-2.3.5
    heritage: Helm
    release: rancher
spec:
  replicas: 3
  selector:
    matchLabels:
      app: rancher
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: rancher
        release: rancher
    spec:
      serviceAccountName: rancher
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - rancher
              topologyKey: kubernetes.io/hostname
      containers:
      - image: rancher/rancher:v2.3.5
        imagePullPolicy: IfNotPresent
        name: rancher
        ports:
        - containerPort: 80
          protocol: TCP
        args:
        # Public trusted CA - clear ca certs
        - "--no-cacerts"
        - "--http-listen-port=80"
        - "--https-listen-port=443"
        - "--add-local=auto"
        env:
        - name: CATTLE_NAMESPACE
          value: cattle-system
        - name: CATTLE_PEER_SERVICE
          value: rancher
        livenessProbe:
          httpGet:
            path: /healthz
            port: 80
          initialDelaySeconds: 60
          periodSeconds: 30
        readinessProbe:
          httpGet:
            path: /healthz
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 30
        resources:
          {}
        volumeMounts:
      volumes:
```
```yaml
apiVersion: v1
kind: Service
metadata:
  name: rancher
  labels:
    app: rancher
    chart: rancher-2.3.5
    heritage: Helm
    release: rancher
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: rancher
```
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rancher
  labels:
    app: rancher
    chart: rancher-2.3.5
    heritage: Helm
    release: rancher
  annotations:
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
spec:
  rules:
  - host: rancher.jicki.cn  # hostname to access rancher server
    http:
      paths:
      - backend:
          serviceName: rancher
          servicePort: 80
  tls:
  - hosts:
    - rancher.jicki.cn
    secretName: tls-rancher-ingress
```
```bash
[root@k8s-node-1 rancher]# kubectl apply -f . -n cattle-system
clusterrolebinding.rbac.authorization.k8s.io/rancher unchanged
deployment.apps/rancher created
ingress.extensions/rancher created
service/rancher created
serviceaccount/rancher created
```
Configuring Certificates
- For an Aliyun SSL certificate, download the Nginx certificate bundle. The *.key file is the certificate's private key; the *.pem file is the certificate itself and usually contains two sections (public.crt + chain.crt).
Importing the certificate into k8s
```bash
kubectl -n cattle-system create \
    secret tls tls-rancher-ingress \
    --cert=3258931_rancher.jicki.cn.pem \
    --key=3258931_rancher.jicki.cn.key
```
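A quick verification that the secret landed with the right type and data keys (not in the original):

```bash
# Should report type kubernetes.io/tls with tls.crt and tls.key entries
kubectl -n cattle-system describe secret tls-rancher-ingress
```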
Configuring a Load-Balancer Entry Point (Optional)
Since I only have one server and use a certificate self-signed via cert-manager, traffic goes straight through the ingress.
With a purchased SSL certificate, you need to import the certificate yourself and configure it on nginx or an SLB.
Either nginx or an SLB (or similar) can serve as the entry point.
Configuring Nginx as a Proxy
{% raw %}
```bash
# Create the configuration directory
mkdir -p /etc/nginx
# Write the proxy configuration
# (quote the heredoc delimiter so the shell does not expand nginx variables like $host)
cat << 'EOF' > /etc/nginx/nginx.conf
worker_processes 2;
worker_rlimit_nofile 40000;

events {
    worker_connections 8192;
}

http {
    # Gzip Settings
    gzip on;
    gzip_disable "msie6";
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";
    gzip_vary on;
    gzip_static on;
    gzip_proxied any;
    gzip_min_length 0;
    gzip_comp_level 8;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;
    gzip_types
        text/xml application/xml application/atom+xml application/rss+xml application/xhtml+xml image/svg+xml application/font-woff
        text/javascript application/javascript application/x-javascript
        text/x-json application/json application/x-web-app-manifest+json
        text/css text/plain text/x-component
        font/opentype application/x-font-ttf application/vnd.ms-fontobject font/woff2
        image/x-icon image/png image/jpeg;

    upstream rancher {
        server IP_NODE_1:80;
        server IP_NODE_2:80;
        server IP_NODE_3:80;
    }

    map $http_upgrade $connection_upgrade {
        default Upgrade;
        ''      close;
    }

    server {
        listen 443 ssl http2;  # For upgrades to or fresh installs of v2.2.2, http2 must be disabled; other versions need no change.
        server_name rancher.jicki.cn;
        ssl_certificate tls.crt;
        ssl_certificate_key tls.key;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Forwarded-Port $server_port;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://rancher;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            # This allows the execute-shell window to remain open for up to 15 minutes.
            # Without this parameter, the default is 1 minute and it will close automatically.
            proxy_read_timeout 900s;
            proxy_buffering off;
        }
    }

    server {
        listen 80;
        server_name rancher.jicki.cn;
        return 301 https://$server_name$request_uri;
    }
}
EOF
```
```bash
chmod +r /etc/nginx/nginx.conf
```
```bash
# The heredoc delimiter stays unquoted here so the \\ pairs collapse into single
# backslashes (line continuations) in the written unit file; it contains no $vars.
cat << EOF > /etc/systemd/system/nginx-proxy.service
[Unit]
Description=nginx proxy docker wrapper
Wants=docker.socket
After=docker.service

[Service]
User=root
PermissionsStartOnly=true
ExecStart=/usr/bin/docker run -p 127.0.0.1:80:80 \\
                              -p 127.0.0.1:443:443 \\
                              -v /etc/nginx:/etc/nginx \\
                              -v /etc/nginx/ssl:/etc/nginx/ssl \\
                              --name nginx-proxy \\
                              --net=host \\
                              --restart=on-failure:5 \\
                              --memory=512M \\
                              nginx:alpine
ExecStartPre=-/usr/bin/docker rm -f nginx-proxy
ExecStop=/usr/bin/docker stop nginx-proxy
Restart=always
RestartSec=15s
TimeoutStartSec=30s

[Install]
WantedBy=multi-user.target
EOF
```
{% endraw %}
```bash
systemctl daemon-reload
systemctl start nginx-proxy
systemctl enable nginx-proxy
systemctl status nginx-proxy
```
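Before relying on the proxy, it's worth validating the rendered config inside the running container (my addition; `nginx -t` is available in the nginx:alpine image):

```bash
# Syntax-check the mounted /etc/nginx/nginx.conf
docker exec nginx-proxy nginx -t
```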
Accessing Rancher
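If rancher.jicki.cn does not resolve publicly yet, a local hosts entry is enough for testing; the IP below is a placeholder for the node running the ingress controller or nginx proxy:

```bash
# Placeholder IP — replace with your node's address
echo "192.168.2.100 rancher.jicki.cn" >> /etc/hosts
```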
Configuring the k8s Cluster Services
- Since I only have one server, everything here runs on the imported k8s cluster that rancher itself was deployed on.
Removing Rancher
```bash
# Delete the resources created from the yaml files
kubectl delete -f .
```
```bash
# Delete the namespaces (namespaces are cluster-scoped, so no -n flag is needed;
# the patch clears the finalizers that would otherwise leave them stuck in Terminating)
# cattle-system
kubectl patch namespace cattle-system -p '{"metadata":{"finalizers":[]}}' --type='merge'
kubectl delete namespace cattle-system --grace-period=0 --force

# cattle-global-data
kubectl patch namespace cattle-global-data -p '{"metadata":{"finalizers":[]}}' --type='merge'
kubectl delete namespace cattle-global-data --grace-period=0 --force

# cattle-global-nt
kubectl patch namespace cattle-global-nt -p '{"metadata":{"finalizers":[]}}' --type='merge'
kubectl delete namespace cattle-global-nt --grace-period=0 --force

# local: clear finalizers on the namespace and on every namespaced resource inside it
kubectl patch namespace local -p '{"metadata":{"finalizers":[]}}' --type='merge'
for resource in `kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get -o name -n local`; do kubectl patch $resource -p '{"metadata": {"finalizers": []}}' --type='merge' -n local; done
kubectl delete namespace local --grace-period=0 --force

# Other related namespaces; look up the exact names in your own environment
# with `kubectl get namespace`, e.g.:
#   p-nw6f6      Active   46m
#   p-s8vtd      Active   46m
#   user-6jntr   Active   43m
kubectl delete namespace p-nw6f6 --grace-period=0 --force
kubectl delete namespace p-s8vtd --grace-period=0 --force
kubectl delete namespace user-6jntr --grace-period=0 --force
kubectl patch namespace p-nw6f6 -p '{"metadata":{"finalizers":[]}}' --type='merge'
kubectl patch namespace p-s8vtd -p '{"metadata":{"finalizers":[]}}' --type='merge'
kubectl patch namespace user-6jntr -p '{"metadata":{"finalizers":[]}}' --type='merge'
```