MicroK8s Implementation

Compiled, organized, and written by snow chuai---2020/08/15


1. Install MicroK8s
1) Install Snappy (snapd)
[root@srv1 ~]# yum --enablerepo=epel install snapd -y
[root@srv1 ~]# ln -s /var/lib/snapd/snap /snap
[root@srv1 ~]# echo 'export PATH=$PATH:/var/lib/snapd/snap/bin' > /etc/profile.d/snap.sh
[root@srv1 ~]# source /etc/profile.d/snap.sh
[root@srv1 ~]# systemctl enable --now snapd.service snapd.socket
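# On some CentOS 7 hosts snapd needs a short while to finish seeding after the
# socket is first enabled, and installing a snap too early can fail. A small
# sanity check before proceeding (a sketch, assuming the stock snapd CLI):
[root@srv1 ~]# snap wait system seed.loaded    # blocks until snapd has finished seeding
[root@srv1 ~]# snap version                    # confirms the client can reach snapd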
2) Install MicroK8s
(1) Install the latest version of MicroK8s
[root@srv1 ~]# snap install microk8s --classic
2020-08-15T02:34:42+08:00 INFO Waiting for automatic snapd restart...
microk8s v1.18.6 from Canonical✓ installed
(2) Install a specific version (v1.17) of MicroK8s
[root@srv1 ~]# snap info microk8s
name:      microk8s
summary:   Lightweight Kubernetes for workstations and appliances
publisher: Canonical✓
store-url: https://snapcraft.io/microk8s
contact:   https://github.com/ubuntu/microk8s
license:   Apache-2.0
description: |
  MicroK8s is the smallest, simplest, pure production Kubernetes for clusters, laptops, IoT and
  Edge, on Intel and ARM. One command installs a single-node K8s cluster with carefully selected
  add-ons on Linux, Windows and macOS. MicroK8s requires no configuration, supports automatic
  updates and GPU acceleration. Use it for offline development, prototyping, testing, to build
  your CI/CD pipeline or your IoT apps.
snap-id: EaXqgt1lyCaxKaQCU349mlodBkDCXRcg
channels:
  latest/stable:    v1.18.6  2020-07-25 (1551) 215MB classic
  latest/candidate: v1.19.0  2020-08-27 (1634) 214MB classic
  latest/beta:      v1.19.0  2020-08-27 (1634) 214MB classic
  latest/edge:      v1.19.0  2020-08-28 (1641) 214MB classic
  dqlite/stable:    –
  dqlite/candidate: –
  dqlite/beta:      –
  dqlite/edge:      v1.16.2  2019-11-07 (1038) 189MB classic
  1.19/stable:      v1.19.0  2020-08-27 (1637) 214MB classic
  1.19/candidate:   v1.19.0  2020-08-27 (1637) 214MB classic
  1.19/beta:        v1.19.0  2020-08-27 (1637) 214MB classic
  1.19/edge:        v1.19.0  2020-08-28 (1642) 214MB classic
  1.18/stable:      v1.18.8  2020-08-25 (1609) 201MB classic
  1.18/candidate:   v1.18.8  2020-08-17 (1609) 201MB classic
  1.18/beta:        v1.18.8  2020-08-17 (1609) 201MB classic
  1.18/edge:        v1.18.8  2020-08-13 (1609) 201MB classic
  1.17/stable:      v1.17.11 2020-08-25 (1608) 179MB classic
  1.17/candidate:   v1.17.11 2020-08-21 (1608) 179MB classic
  1.17/beta:        v1.17.11 2020-08-21 (1608) 179MB classic
  1.17/edge:        v1.17.11 2020-08-13 (1608) 179MB classic
  ......
  ......
  1.10/stable:      v1.10.13 2019-04-22 (546)  222MB classic
  1.10/candidate:   ↑
  1.10/beta:        ↑
  1.10/edge:        ↑
[root@srv1 ~]# snap install microk8s --channel=1.17/stable --classic
# The output follows
3) Check the current status of MicroK8s
[root@srv1 ~]# microk8s status
microk8s is running
addons:
ambassador: disabled
cilium: disabled
dashboard: disabled
dns: disabled
fluentd: disabled
gpu: disabled
helm: disabled
helm3: disabled
host-access: disabled
ingress: disabled
istio: disabled
jaeger: disabled
knative: disabled
kubeflow: disabled
linkerd: disabled
metallb: disabled
metrics-server: disabled
multus: disabled
prometheus: disabled
rbac: disabled
registry: disabled
storage: disabled
4) Show the current configuration
[root@srv1 ~]# microk8s config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR......
    server: https://192.168.10.11:16443
  name: microk8s-cluster
contexts:
- context:
    cluster: microk8s-cluster
    user: admin
  name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: admin
  user:
    token: SmoyTEI0WW93RHJ6TXYzVmpRRjE1N2lzbWFuQUhDVEZyZTVRQUtQTzR5QT0K
[root@srv1 ~]# microk8s kubectl get all
NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.152.183.1   <none>        443/TCP   86s
[root@srv1 ~]# microk8s kubectl get nodes
NAME               STATUS   ROLES    AGE    VERSION
srv1.1000y.cloud   Ready    <none>   103s   v1.18.6-1+64f53401f200a7
5) Stop or start MicroK8s
[root@srv1 ~]# microk8s stop
Stopped.
[root@srv1 ~]# microk8s status
microk8s is not running. Use microk8s inspect for a deeper inspection.
[root@srv1 ~]# microk8s start
Started.
[root@srv1 ~]# snap disable microk8s
microk8s disabled
[root@srv1 ~]# snap enable microk8s
microk8s enabled
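# Typing "microk8s kubectl" before every command gets tedious. As an optional
# convenience, snap can alias the bundled kubectl to a bare "kubectl"
# (a sketch, assuming no other kubectl is installed on this host):
[root@srv1 ~]# snap alias microk8s.kubectl kubectl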
2. Deploy Pods
1) Install Docker and configure a registry mirror (accelerator)
[root@srv1 ~]# yum install docker -y
[root@srv1 ~]# vim /etc/docker/daemon.json
{
    "registry-mirrors": ["https://3laho3y3.mirror.aliyuncs.com"]
}
[root@srv1 ~]# systemctl enable --now docker
2) Working around the wall---resolving inaccessible k8s.gcr.io
[root@srv1 ~]# vim images-pull.sh
#!/bin/bash
# Map each blocked image name to a reachable mirror (format: target=source)
images=(
    k8s.gcr.io/pause:3.1=mirrorgooglecontainers/pause-amd64:3.1
    gcr.io/google_containers/defaultbackend-amd64:1.4=mirrorgooglecontainers/defaultbackend-amd64:1.4
    k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1=registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
    k8s.gcr.io/heapster-influxdb-amd64:v1.3.3=registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-influxdb-amd64:v1.3.3
    k8s.gcr.io/heapster-amd64:v1.5.2=registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-amd64:v1.5.2
    k8s.gcr.io/heapster-grafana-amd64:v4.4.3=registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-grafana-amd64:v4.4.3
    k8s.gcr.io/metrics-server-amd64:v0.3.6=mirrorgooglecontainers/metrics-server-amd64:v0.3.6
    k8s.gcr.io/fluentd-elasticsearch:v2.2.0=mirrorgooglecontainers/fluentd-elasticsearch:v2.2.0
)

OIFS=$IFS
for image in ${images[@]}; do
    # Split target=source on '='
    IFS='='
    set $image
    # Pull from the mirror, retag to the original name, then drop the mirror tag
    docker pull $2
    docker tag $2 $1
    docker rmi $2
    # Export the image and import it into MicroK8s' containerd (k8s.io namespace)
    docker save $1 > 1.tar && microk8s.ctr --namespace k8s.io image import 1.tar && rm 1.tar
    IFS=$OIFS
done

# Additional images that are directly reachable
docker pull coredns/coredns:1.6.6
docker pull kubernetesui/dashboard:v2.0.0
docker pull kubernetesui/metrics-scraper:v1.0.4
docker pull docker.io/cdkbot/hostpath-provisioner-amd64:1.0.0
docker pull nginx
docker pull cdkbot/registry-amd64:2.6

[root@srv1 ~]# chmod 700 images-pull.sh
[root@srv1 ~]# ./images-pull.sh
[root@srv1 ~]# docker images
REPOSITORY                                              TAG       IMAGE ID       CREATED         SIZE
docker.io/library/nginx                                 latest    4bb46517cac3   17 hours ago    137 MB
docker.io/kubernetesui/dashboard                        v2.0.0    8b32422733b3   3 months ago    225 MB
docker.io/kubernetesui/metrics-scraper                  v1.0.4    86262685d9ab   4 months ago    37 MB
docker.io/coredns/coredns                               1.6.6     cc4d8e8c6169   8 months ago    41 MB
docker.io/mirrorgooglecontainers/metrics-server-amd64   v0.3.6    9dd718864ce6   10 months ago   41.2 MB
k8s.gcr.io/metrics-server-amd64                         v0.3.6    9dd718864ce6   10 months ago   41.2 MB
k8s.gcr.io/kubernetes-dashboard-amd64                   v1.10.1   f9aed6605b81   20 months ago   122 MB
k8s.gcr.io/fluentd-elasticsearch                        v2.2.0    3e3172353877   2 years ago     147 MB
docker.io/cdkbot/hostpath-provisioner-amd64             1.0.0     dc1b767a6407   2 years ago     41.7 MB
docker.io/cdkbot/registry-amd64                         2.6       fbb9478e00d7   2 years ago     151 MB
k8s.gcr.io/heapster-amd64                               v1.5.2    b2d460f2d2b9   2 years ago     75.3 MB
k8s.gcr.io/pause                                        3.1       da86e6ba6ca1   2 years ago     747 kB
gcr.io/google_containers/defaultbackend-amd64           1.4       846921f0fe0e   2 years ago     4.85 MB
docker.io/mirrorgooglecontainers/defaultbackend-amd64   1.4       846921f0fe0e   2 years ago     4.85 MB
k8s.gcr.io/heapster-influxdb-amd64                      v1.3.3    577260d221db   2 years ago     12.8 MB
k8s.gcr.io/heapster-grafana-amd64                       v4.4.3    8cb3de219af7   2 years ago     155 MB
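# Because MicroK8s runs its own containerd rather than sharing the Docker daemon,
# the images only become usable by pods after the "microk8s.ctr ... image import"
# step in the script. A quick way to confirm they landed (a sketch; microk8s.ctr
# is the bundled containerd client already used above):
[root@srv1 ~]# microk8s.ctr --namespace k8s.io images ls | grep k8s.gcr.io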
3) Deploy the qyy-nginx pod
[root@srv1 ~]# microk8s kubectl create deployment qyy-nginx --image=nginx
deployment.apps/qyy-nginx created
[root@srv1 ~]# microk8s kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
qyy-nginx-848dcf5499-2m7bj   1/1     Running   0          15m
4) Show the qyy-nginx environment
[root@srv1 ~]# microk8s kubectl exec qyy-nginx-848dcf5499-2m7bj -- env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=qyy-nginx-848dcf5499-2m7bj
NGINX_VERSION=1.19.2
NJS_VERSION=0.4.3
PKG_RELEASE=1~buster
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.152.183.1
KUBERNETES_SERVICE_HOST=10.152.183.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.152.183.1:443
KUBERNETES_PORT_443_TCP=tcp://10.152.183.1:443
HOME=/root
5) Access the qyy-nginx pod
[root@srv1 ~]# microk8s kubectl exec -it qyy-nginx-848dcf5499-2m7bj -- bash
root@qyy-nginx-848dcf5499-2m7bj:/# hostname
qyy-nginx-848dcf5499-2m7bj
root@qyy-nginx-848dcf5499-2m7bj:/# date
Fri Aug 14 13:50:00 UTC 2020
root@qyy-nginx-848dcf5499-2m7bj:/# exit
exit
[root@srv1 ~]#
6) View the qyy-nginx pod logs
[root@srv1 ~]# microk8s kubectl logs qyy-nginx-848dcf5499-2m7bj
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
7) Scale the qyy-nginx pods horizontally
[root@srv1 ~]# microk8s kubectl scale deployment qyy-nginx --replicas=3
deployment.apps/qyy-nginx scaled
[root@srv1 ~]# microk8s kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
qyy-nginx-848dcf5499-2m7bj   1/1     Running   0          20m
qyy-nginx-848dcf5499-jpdcb   1/1     Running   0          17s
qyy-nginx-848dcf5499-pshmh   1/1     Running   0          17s
8) Expose the port
# Map the container's port 80 to a random port on the local host
[root@srv1 ~]# microk8s kubectl expose deployment qyy-nginx --type="NodePort" --port 80
service/qyy-nginx exposed
[root@srv1 ~]# microk8s kubectl get services qyy-nginx
NAME        TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
qyy-nginx   NodePort   10.152.183.225   <none>        80:30034/TCP   56s
9) Access tests
(1) Access test---via the ClusterIP
[root@srv1 ~]# curl 10.152.183.225
......
......
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
(2) Access test---via the host IP
[root@srv1 ~]# curl http://srv1.1000y.cloud:30034
......
......
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
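# The expose command above creates the Service imperatively. For reference, a
# roughly equivalent declarative manifest is sketched below. Assumptions: the
# file name qyy-nginx-svc.yml is only illustrative, the selector relies on the
# app=qyy-nginx label that "kubectl create deployment" attaches by default, and
# the NodePort number is assigned randomly unless nodePort: is pinned.
[root@srv1 ~]# vim qyy-nginx-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: qyy-nginx
spec:
  type: NodePort
  selector:
    # Default label added by "kubectl create deployment qyy-nginx"
    app: qyy-nginx
  ports:
    - port: 80
      targetPort: 80

[root@srv1 ~]# microk8s kubectl apply -f qyy-nginx-svc.yml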
10) Delete the service
[root@srv1 ~]# microk8s kubectl delete services qyy-nginx
service "qyy-nginx" deleted
11) Delete the pods
[root@srv1 ~]# microk8s kubectl delete deployment qyy-nginx
deployment.apps "qyy-nginx" deleted
[root@srv1 ~]# microk8s kubectl get pods
No resources found in default namespace.
3. Add Nodes (Scale the MicroK8s Cluster)
1) Generate the cluster join information on the primary node
[root@srv1 ~]# microk8s add-node
Join node with: microk8s join 192.168.10.11:25000/d2f1c7ad498793f7a4ed0a58e13dc40a
If the node you are adding is not reachable through the default interface you can use one of the following:
 microk8s join 192.168.10.11:25000/d2f1c7ad498793f7a4ed0a58e13dc40a
 microk8s join 10.1.31.0:25000/d2f1c7ad498793f7a4ed0a58e13dc40a
[root@srv1 ~]# firewall-cmd --add-port={25000/tcp,16443/tcp,12379/tcp,10250/tcp,10255/tcp,10257/tcp,10259/tcp} --permanent
[root@srv1 ~]# firewall-cmd --reload
2) Install Snappy on the second node
[root@srv2 ~]# yum --enablerepo=epel install snapd -y
[root@srv2 ~]# ln -s /var/lib/snapd/snap /snap
[root@srv2 ~]# echo 'export PATH=$PATH:/var/lib/snapd/snap/bin' > /etc/profile.d/snap.sh
[root@srv2 ~]# source /etc/profile.d/snap.sh
[root@srv2 ~]# systemctl enable --now snapd.service snapd.socket
3) Install MicroK8s
[root@srv2 ~]# snap install microk8s --classic
microk8s v1.18.6 from Canonical✓ installed
4) Join the MicroK8s cluster
[root@srv2 ~]# export OPENSSL_CONF=/var/lib/snapd/snap/microk8s/current/etc/ssl/openssl.cnf
[root@srv2 ~]# firewall-cmd --add-port={25000/tcp,10250/tcp,10255/tcp} --permanent
[root@srv2 ~]# firewall-cmd --reload
[root@srv2 ~]# microk8s join 192.168.10.11:25000/d2f1c7ad498793f7a4ed0a58e13dc40a
5) Verify
[root@srv1 ~]# microk8s kubectl get nodes
NAME               STATUS   ROLES    AGE    VERSION
srv1.1000y.cloud   Ready    <none>   102m   v1.18.6-1+64f53401f200a7
srv2.1000y.cloud   Ready    <none>   44s    v1.18.6-1+64f53401f200a7
6) Remove a node
[root@srv1 ~]# microk8s remove-node srv2.1000y.cloud
[root@srv1 ~]# microk8s kubectl get nodes
NAME               STATUS   ROLES    AGE    VERSION
srv1.1000y.cloud   Ready    <none>   104m   v1.18.6-1+64f53401f200a7
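# Note: if the departing node is still reachable, MicroK8s also ships a command
# to run on that node itself so its local cluster state is reset; it is
# typically issued before remove-node on the primary (a sketch):
[root@srv2 ~]# microk8s leave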
4. Enable the Dashboard
1) Enable the Dashboard
[root@srv1 ~]# microk8s enable dashboard dns
Enabling Kubernetes Dashboard
Enabling Metrics-Server
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created      
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created 
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created                          
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created                 
clusterrolebinding.rbac.authorization.k8s.io/microk8s-admin created
Adding argument --authentication-token-webhook to nodes.
Applying to node srv2.1000y.cloud.
Metrics-Server is enabled
Applying manifest
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
If RBAC is not enabled access the dashboard using the default token retrieved with:
token=$(microk8s kubectl -n kube-system get secret | grep default-token | cut -d " " -f1)
microk8s kubectl -n kube-system describe secret $token
In an RBAC enabled setup (microk8s enable RBAC) you need to create a user with restricted permissions as shown in: https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md
Enabling DNS
Applying manifest
serviceaccount/coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
clusterrole.rbac.authorization.k8s.io/coredns created
clusterrolebinding.rbac.authorization.k8s.io/coredns created
Restarting kubelet
Adding argument --cluster-domain to nodes.
Applying to node srv2.1000y.cloud.
Adding argument --cluster-dns to nodes.
Applying to node srv2.1000y.cloud.
Restarting nodes.
Applying to node srv2.1000y.cloud.
DNS is enabled
[root@srv1 ~]# microk8s kubectl get services -n kube-system
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
dashboard-metrics-scraper   ClusterIP   10.152.183.32   <none>        8000/TCP                 3m54s
kube-dns                    ClusterIP   10.152.183.10   <none>        53/UDP,53/TCP,9153/TCP   3m52s
kubernetes-dashboard        ClusterIP   10.152.183.83   <none>        443/TCP                  3m55s
metrics-server              ClusterIP   10.152.183.74   <none>        443/TCP                  3m58s
[root@srv1 ~]# microk8s kubectl -n kube-system get pods -o wide
NAME                                        READY   STATUS    RESTARTS   AGE     IP          NODE               NOMINATED NODE   READINESS GATES
coredns-588fd544bf-mmwgm                    1/1     Running   0          3m22s   10.1.46.8   srv1.1000y.cloud   <none>           <none>
dashboard-metrics-scraper-59f5574d4-7r2vx   1/1     Running   0          3m25s   10.1.46.7   srv1.1000y.cloud   <none>           <none>
kubernetes-dashboard-6d97855997-qs786       1/1     Running   0          3m25s   10.1.46.5   srv1.1000y.cloud   <none>           <none>
metrics-server-c65c9d66-9rsmv               1/1     Running   1          3m30s   10.1.46.6   srv1.1000y.cloud   <none>           <none>
# If a pod stays in a non-Running state for a long time, investigate with:
[root@srv1 ~]# microk8s kubectl -n kube-system describe pod $pod_name
2) Obtain a token to log in to the Dashboard
[root@srv1 ~]# microk8s kubectl -n kube-system describe secret $(microk8s kubectl -n kube-system get secret | grep default-token | awk '{print $1}')
Name:         default-token-ktrw9
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: default
              kubernetes.io/service-account.uid: 5645bbb6-2548-4879-98f0-bbd3e2bdaca0
Type: kubernetes.io/service-account-token
Data
====
ca.crt:     1103 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkFmaml4RDhsck9vNG9XRTZFa2Z1OWphNkZlQUlkZXZxbV9MWEthS3BsNjgifQ.eyJpc3MiO......
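# Since this is an ordinary service-account bearer token, it can be
# sanity-checked against the API server before logging in to the Dashboard.
# A minimal sketch, assuming the API endpoint 192.168.10.11:16443 shown by
# "microk8s config" earlier:
[root@srv1 ~]# TOKEN=$(microk8s kubectl -n kube-system get secret $(microk8s kubectl -n kube-system get secret | grep default-token | awk '{print $1}') -o jsonpath='{.data.token}' | base64 -d)
[root@srv1 ~]# curl -k -H "Authorization: Bearer $TOKEN" https://192.168.10.11:16443/api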
3) Open an external port on the host for outside access to the Dashboard
[root@srv1 ~]# microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard --address 0.0.0.0 22222:443
Forwarding from 0.0.0.0:22222 -> 8443
4) Access the Dashboard
[Browser]===>https://$srv_ip:22222


5. Use Storage
1) Enable the built-in storage
[root@srv1 ~]# export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/lib64"
[root@srv1 ~]# microk8s enable storage
Enabling default storage class
deployment.apps/hostpath-provisioner created
storageclass.storage.k8s.io/microk8s-hostpath created
serviceaccount/microk8s-hostpath created
clusterrole.rbac.authorization.k8s.io/microk8s-hostpath created
clusterrolebinding.rbac.authorization.k8s.io/microk8s-hostpath created
Storage will be available soon
2) Confirm the hostpath-provisioner pod exists and is in the Running state
[root@srv1 ~]# microk8s kubectl -n kube-system get pods -o wide
NAME                                        READY   STATUS    RESTARTS   AGE     IP           NODE               NOMINATED NODE   READINESS GATES
coredns-588fd544bf-mmwgm                    1/1     Running   0          20m     10.1.46.8    srv1.1000y.cloud   <none>           <none>
dashboard-metrics-scraper-59f5574d4-7r2vx   1/1     Running   0          20m     10.1.46.7    srv1.1000y.cloud   <none>           <none>
hostpath-provisioner-75fdc8fccd-bsqfn       1/1     Running   0          2m35s   10.1.46.10   srv1.1000y.cloud   <none>           <none>
kubernetes-dashboard-6d97855997-qs786       1/1     Running   0          20m     10.1.46.5    srv1.1000y.cloud   <none>           <none>
metrics-server-c65c9d66-9rsmv               1/1     Running   1          20m     10.1.46.6    srv1.1000y.cloud   <none>           <none>
3) Use the storage
(1) Create and apply a PVC (PersistentVolumeClaim)
[root@srv1 ~]# vim test-pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # Set the PVC name
  name: test-pvc
spec:
  # Set the access mode:
  # - ReadWriteMany (read/write on multiple nodes)
  # - ReadWriteOnce (read/write on a single node)
  # - ReadOnlyMany  (read-only on multiple nodes)
  accessModes:
  - ReadWriteOnce
  # Specify microk8s-hostpath
  storageClassName: microk8s-hostpath
  resources:
    requests:
      # Size of the requested storage
      storage: 1Gi
[root@srv1 ~]# microk8s kubectl create -f test-pvc.yml
persistentvolumeclaim/test-pvc created
[root@srv1 ~]# microk8s kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS        REASON   AGE
pvc-3b849534-7baa-416f-b15b-da3f9d7348f5   1Gi        RWO            Delete           Bound    default/test-pvc   microk8s-hostpath            16s
[root@srv1 ~]# microk8s kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
test-pvc   Bound    pvc-3b849534-7baa-416f-b15b-da3f9d7348f5   1Gi        RWO            microk8s-hostpath   38s
(2) Create an Nginx pod that uses the storage
[root@srv1 ~]# vim nginx-pv.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-testpv
  labels:
    name: nginx-testpv
spec:
  containers:
    - name: nginx-testpv
      image: nginx
      ports:
        - name: web
          containerPort: 80
      volumeMounts:
        - name: my-persistent-volume
          mountPath: /usr/share/nginx/html
  volumes:
    - name: my-persistent-volume
      persistentVolumeClaim:
        # Specify the PVC to use
        claimName: test-pvc
[root@srv1 ~]# microk8s kubectl create -f nginx-pv.yml
pod/nginx-testpv created
[root@srv1 ~]# microk8s kubectl get pods -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP           NODE               NOMINATED NODE   READINESS GATES
nginx-testpv   1/1     Running   0          18s   10.1.46.13   srv1.1000y.cloud   <none>           <none>
(3) Verify the PVC path
[root@srv1 ~]# microk8s kubectl describe -n kube-system pod/hostpath-provisioner-75fdc8fccd-bsqfn | grep PV_DIR
      PV_DIR:  /var/snap/microk8s/common/default-storage
[root@srv1 ~]# microk8s kubectl describe pvc/test-pvc | grep ^Volume:
Volume:        pvc-3b849534-7baa-416f-b15b-da3f9d7348f5
(4) Test
[root@srv1 ~]# echo "Hello 1000y.cloud" > /var/snap/microk8s/common/default-storage/default-test-pvc-pvc-3b849534-7baa-416f-b15b-da3f9d7348f5/index.html
[root@srv1 ~]# curl 10.1.46.13
Hello 1000y.cloud
6. Use the Registry
1) Enable the built-in registry
# Enable a registry backed by 30G of storage
# If no size is specified, the default is 20G
# The size option is available on MicroK8s 1.18.3 and later
[root@srv1 ~]# microk8s enable registry:size=30Gi
Addon storage is already enabled.
Enabling the private registry
Applying registry manifest
namespace/container-registry created
persistentvolumeclaim/registry-claim created
deployment.apps/registry created
service/registry created
The registry is enabled
The size of the persistent volume is 30Gi
2) Confirm the built-in registry pod is running
[root@srv1 ~]# microk8s kubectl get pods -A
NAMESPACE            NAME                                        READY   STATUS    RESTARTS   AGE
container-registry   registry-7cf58dcdcc-2slsm                   1/1     Running   0          7m20s
default              nginx-testpv                                1/1     Running   0          14m
kube-system          coredns-588fd544bf-mmwgm                    1/1     Running   0          61m
kube-system          dashboard-metrics-scraper-59f5574d4-7r2vx   1/1     Running   0          61m
kube-system          hostpath-provisioner-75fdc8fccd-bsqfn       1/1     Running   0          43m
kube-system          kubernetes-dashboard-6d97855997-qs786       1/1     Running   0          61m
kube-system          metrics-server-c65c9d66-9rsmv               1/1     Running   1          61m
[root@srv1 containers]# microk8s kubectl -n container-registry get service
NAME       TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
registry   NodePort   10.152.183.226   <none>        5000:32000/TCP   11m
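# Before touching any Docker client configuration, the registry can be probed
# over its NodePort with the standard Docker Registry HTTP API (a quick sketch;
# the catalog stays empty until an image is pushed):
[root@srv1 ~]# curl http://srv1.1000y.cloud:32000/v2/_catalog
{"repositories":[]}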
3) Point the Docker client at the registry
[root@docker1 ~]# vim /etc/sysconfig/docker
OPTIONS='--insecure-registry srv1.1000y.cloud:32000 --selinux-enabled --log-driver=journald.....'
[root@docker1 ~]# systemctl restart docker
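# On Docker installations configured through /etc/docker/daemon.json instead of
# /etc/sysconfig/docker, the equivalent setting is sketched below; merge it with
# the registry-mirrors entry added earlier and restart Docker afterwards:
[root@docker1 ~]# vim /etc/docker/daemon.json
{
    "registry-mirrors": ["https://3laho3y3.mirror.aliyuncs.com"],
    "insecure-registries": ["srv1.1000y.cloud:32000"]
}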
4) Push test
[root@docker1 ~]# docker tag docker.io/library/nginx:latest srv1.1000y.cloud:32000/my-nginx:registry
[root@docker1 ~]# docker push srv1.1000y.cloud:32000/my-nginx:registry
Getting image source signatures
Copying blob 550333325e31 done
Copying blob a4d893caa5c9 done
Copying blob 0338db614b95 done
Copying blob d0f104dc0a1f done
Copying blob 22ea89b1a816 done
Copying config 4bb46517ca done
Writing manifest to image destination
Storing signatures
5) Pull test
[root@docker1 ~]# docker rmi srv1.1000y.cloud:32000/my-nginx:registry
[root@docker1 ~]# docker pull srv1.1000y.cloud:32000/my-nginx:registry
Trying to pull srv1.1000y.cloud:32000/my-nginx:registry...
Getting image source signatures
Copying blob 2cd306a3f88c skipped: already exists
Copying blob 7fdd3343c128 skipped: already exists
Copying blob 165a1dee380a skipped: already exists
Copying blob 91bee552f464 skipped: already exists
Copying blob d92c10fdaa3a skipped: already exists
Copying config 4bb46517ca done
Writing manifest to image destination
Storing signatures
4bb46517cac397bdb0bab6eba09b0e1f8e90ddd17cf99662997c3253531136f8
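# With the image now in the private registry, MicroK8s itself can deploy from
# it. A minimal sketch; it assumes the in-cluster address localhost:32000,
# which the registry add-on configures MicroK8s' containerd to trust by default:
[root@srv1 ~]# microk8s kubectl create deployment my-nginx --image=localhost:32000/my-nginx:registry
deployment.apps/my-nginx created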
7. Enable Fluentd---ELK Stack
1) Enable the built-in Fluentd add-on and its pods
[root@srv1 ~]# microk8s enable fluentd dns
Enabling Fluentd-Elasticsearch
Labeling nodes
node/srv1.1000y.cloud labeled
Addon dns is already enabled.
service/elasticsearch-logging created
serviceaccount/elasticsearch-logging created
clusterrole.rbac.authorization.k8s.io/elasticsearch-logging created
clusterrolebinding.rbac.authorization.k8s.io/elasticsearch-logging created
statefulset.apps/elasticsearch-logging created
configmap/fluentd-es-config-v0.1.5 created
serviceaccount/fluentd-es created
clusterrole.rbac.authorization.k8s.io/fluentd-es created
clusterrolebinding.rbac.authorization.k8s.io/fluentd-es created
daemonset.apps/fluentd-es-v2.2.0 created
deployment.apps/kibana-logging created
service/kibana-logging created
Fluentd-Elasticsearch is enabled
Addon dns is already enabled.
[root@srv1 ~]# microk8s kubectl get services -n kube-system
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
dashboard-metrics-scraper   ClusterIP   10.152.183.225   <none>        8000/TCP                 38m
elasticsearch-logging       ClusterIP   10.152.183.50    <none>        9200/TCP                 40s
kibana-logging              ClusterIP   10.152.183.52    <none>        5601/TCP                 39s
kube-dns                    ClusterIP   10.152.183.10    <none>        53/UDP,53/TCP,9153/TCP   37m
kubernetes-dashboard        ClusterIP   10.152.183.120   <none>        443/TCP                  38m
metrics-server              ClusterIP   10.152.183.81    <none>        443/TCP                  38m
[root@srv1 ~]# microk8s kubectl get pods -A
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
......
......
kube-system   coredns-588fd544bf-xwjvk          1/1     Running   0          68m
kube-system   elasticsearch-logging-0           1/1     Running   0          31m
kube-system   fluentd-es-v2.2.0-4w555           1/1     Running   0          31m
kube-system   kibana-logging-84f486f46b-fzn9j   1/1     Running   0          31m
......
......
2) Verify the ELK cluster information
[root@srv1 ~]# microk8s kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:16443
Elasticsearch is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy
Kibana is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/kibana-logging/proxy
CoreDNS is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://127.0.0.1:16443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
3) Run kube proxy and open up access
[root@srv1 ~]# microk8s kubectl proxy --address=0.0.0.0 --accept-hosts=.*
Starting to serve on [::]:8001
4) Firewall settings
[root@srv1 ~]# firewall-cmd --add-port=8001/tcp --permanent
[root@srv1 ~]# firewall-cmd --reload
5) Access test
[Browser]===>http://<MicroK8s primary node>:8001/api/v1/namespaces/kube-system/services/kibana-logging/proxy
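# The same proxy also exposes Elasticsearch itself, which gives a quick health
# check of the logging backend before opening Kibana (a sketch;
# _cluster/health is the standard Elasticsearch REST endpoint):
[root@srv1 ~]# curl http://localhost:8001/api/v1/namespaces/kube-system/services/elasticsearch-logging/proxy/_cluster/health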


8. Enable the Prometheus Add-on
1) Enable the built-in Prometheus add-on and its pods
[root@srv1 ~]# microk8s enable prometheus dashboard dns
Enabling Prometheus
namespace/monitoring created 
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
service/prometheus-operator created       
serviceaccount/prometheus-operator created               
alertmanager.monitoring.coreos.com/main created                 
secret/alertmanager-main created  
service/alertmanager-main created        
serviceaccount/alertmanager-main created                       
servicemonitor.monitoring.coreos.com/alertmanager created  
secret/grafana-datasources created
......
......
......
......
......
......
servicemonitor.monitoring.coreos.com/kube-controller-manager created
servicemonitor.monitoring.coreos.com/kube-scheduler created
servicemonitor.monitoring.coreos.com/kubelet created
The Prometheus operator is enabled (user/pass: admin/admin)
Addon dashboard is already enabled.
Addon dns is already enabled.
[root@srv1 ~]# microk8s kubectl get services -n monitoring
NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
alertmanager-main     ClusterIP   10.152.183.51    <none>        9093/TCP            68s
grafana               ClusterIP   10.152.183.240   <none>        3000/TCP            63s
kube-state-metrics    ClusterIP   None             <none>        8443/TCP,9443/TCP   60s
node-exporter         ClusterIP   None             <none>        9100/TCP            59s
prometheus-adapter    ClusterIP   10.152.183.146   <none>        443/TCP             57s
prometheus-k8s        ClusterIP   10.152.183.222   <none>        9090/TCP            53s
prometheus-operator   ClusterIP   None             <none>        8080/TCP            77s
[root@srv1 ~]# microk8s kubectl get pods -n monitoring
NAME                                  READY   STATUS    RESTARTS   AGE
alertmanager-main-0                   2/2     Running   0          8m32s
grafana-fbb6785d5-l5qzt               1/1     Running   0          12m
kube-state-metrics-dcc94d9f8-hm5zr    3/3     Running   0          12m
node-exporter-7tmh4                   2/2     Running   0          12m
prometheus-adapter-5949969998-dbp6f   1/1     Running   0          12m
prometheus-k8s-0                      3/3     Running   1          8m21s
prometheus-operator-5c7dcf954-hgkqd   1/1     Running   0          12m
2) Authorize access
(1) Prometheus UI
[root@srv1 ~]# microk8s kubectl port-forward -n monitoring service/prometheus-k8s --address 0.0.0.0 9090:9090 &
Forwarding from 0.0.0.0:9090 -> 9090
(2) Grafana UI
[root@srv1 ~]# microk8s kubectl port-forward -n monitoring service/grafana --address 0.0.0.0 3000:3000 &
Forwarding from 0.0.0.0:3000 -> 3000
3) Firewall settings
[root@srv1 ~]# firewall-cmd --add-port={9090/tcp,3000/tcp} --permanent
[root@srv1 ~]# firewall-cmd --reload
4) Access tests
(1) Access the Prometheus UI
[Browser]===>http://<MicroK8s primary node>:9090

(2) Access the Grafana UI
[Browser]===>http://<MicroK8s primary node>:3000===>the username and password are both admin