CKA exam questions

Published: 2023-11-21 21:51:56  Author: 烟雨楼台,行云流水

3.1.1 Question 1: RBAC — submission guidelines. Reference: https://kubernetes.io/zh-cn/docs/reference/access-authn-authz/rbac/
For the first question, RBAC, submit your work as described below after finishing. The standard solution steps I provide are as follows.
Solution:
Run the context switch during the exam; it is not needed in the practice environment.

root@master1:~# kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets
clusterrole.rbac.authorization.k8s.io/deployment-clusterrole created
root@master1:~# kubectl create ns app-team1
namespace/app-team1 created
root@master1:~# kubectl create sa cicd-token -n app-team1
serviceaccount/cicd-token created
# Note: a ClusterRoleBinding is cluster-scoped (the -n flag below is ignored) and is only needed when the question does not limit the permission to a namespace.
root@master1:~# kubectl create clusterrolebinding chenxi -n app-team1 --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token
clusterrolebinding.rbac.authorization.k8s.io/chenxi created
# The question limits the permission to namespace app-team1, so the RoleBinding is the graded answer:
root@master1:~# kubectl create rolebinding chenxi -n app-team1 --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token
rolebinding.rbac.authorization.k8s.io/chenxi created
root@master1:~# kubectl describe rolebinding chenxi -n app-team1 
Name:         chenxi
Labels:       <none>
Annotations:  <none>
Role:
  Kind:  ClusterRole
  Name:  deployment-clusterrole
Subjects:
  Kind            Name        Namespace
  ----            ----        ---------
  ServiceAccount  cicd-token  app-team1




[student@node-1] $ kubectl config use-context k8s
[student@node-1] $ kubectl create clusterrole deployment-clusterrole --verb=create --resource=deployments,statefulsets,daemonsets
[student@node-1] $ kubectl create serviceaccount cicd-token -n app-team1
# The question says "limited to namespace app-team1", so create a rolebinding. If it did not, you would create a clusterrolebinding instead.
[student@node-1] $ kubectl create rolebinding cicd-token-binding -n app-team1 --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token
# The rolebinding name cicd-token-binding is arbitrary because the question does not specify one; if it did, you would have to use that exact name.
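To verify the binding before submitting, kubectl can impersonate the service account. A quick check against the live exam cluster (the names match the commands above; this requires a running cluster, so it is only a sketch here):

```shell
# expect "yes": create deployments is granted in app-team1
kubectl auth can-i create deployments -n app-team1 \
  --as=system:serviceaccount:app-team1:cicd-token
# expect "no": delete was not in the verb list
kubectl auth can-i delete deployments -n app-team1 \
  --as=system:serviceaccount:app-team1:cicd-token
```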

  

3.1.2 Question 2: Node maintenance — submission guidelines

For the second question, node maintenance, submit your work as described below after finishing.

[student@node-1] $ kubectl config use-context ek8s
[student@node-1] $ kubectl cordon ek8s-node-1    # mark the node unschedulable
[student@node-1] $ kubectl drain ek8s-node-1 --delete-emptydir-data --ignore-daemonsets --force
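After the drain, it is worth confirming the node state before moving on. A quick check against the live exam cluster (only DaemonSet-managed pods such as kube-proxy and the CNI agent should remain on the node):

```shell
kubectl get node ek8s-node-1    # STATUS should show Ready,SchedulingDisabled
kubectl get pods -A -o wide --field-selector spec.nodeName=ek8s-node-1
```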

  

3.1.3 Question 3: Kubernetes version upgrade — submission guidelines. Search the official docs for "kubeadm-upgrade": https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/
For the third question, the Kubernetes version upgrade, submit your work as described below after finishing. The standard solution steps I provide are as follows.
Solution:
Run the context switch during the exam; it is not needed in the practice environment.
root@master1:~# kubectl get node    # check current versions
NAME      STATUS   ROLES                  AGE   VERSION
master1   Ready    control-plane,master   15h   v1.23.1
node1     Ready    <none>                 15h   v1.23.1
root@master1:~# kubectl cordon master1 
node/master1 cordoned
root@master1:~# kubectl drain master1 --delete-emptydir-data --ignore-daemonsets --force
node/master1 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-zhb6k, kube-system/kube-proxy-l9fdg
evicting pod kube-system/coredns-65c54cc984-4dkqh
evicting pod kube-system/calico-kube-controllers-677cd97c8d-qnpr9
evicting pod kube-system/coredns-65c54cc984-2xqz8
pod/calico-kube-controllers-677cd97c8d-qnpr9 evicted
pod/coredns-65c54cc984-4dkqh evicted
pod/coredns-65c54cc984-2xqz8 evicted
node/master1 drained

Check which kubeadm package version is available:
root@master1:/home/chenxi# apt-cache show kubeadm | grep 1.23.2
Version: 1.23.2-00
Filename: pool/kubeadm_1.23.2-00_amd64_f3593ab00d33e8c0a19e24c7a8c81e74a02e601d0f1c61559a5fb87658b53563.deb

root@master1:~# kubeadm upgrade apply v1.23.2 --etcd-upgrade=false --force
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1121 13:01:19.726418  696317 utils.go:69] The recommended value for "resolvConf" in "KubeletConfiguration" is: /run/systemd/resolve/resolv.conf; the provided value is: /run/systemd/resolve/resolv.conf
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.23.2"
[upgrade/versions] Cluster version: v1.23.17
[upgrade/versions] kubeadm version: v1.23.1
[upgrade/version] Found 1 potential version compatibility errors but skipping since the --force flag is set: 

	- Specified version to upgrade to "v1.23.2" is higher than the kubeadm version "v1.23.1". Upgrade kubeadm first using the tool you used to install kubeadm
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.23.2"...
Static pod: kube-apiserver-master1 hash: 3c8f61a122c8e355df03d157fa6c23fc
Static pod: kube-controller-manager-master1 hash: 5a2921269046b06a9e27540a966d9134
Static pod: kube-scheduler-master1 hash: d7cc8771deae6f604bf4c846a40e8638
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests2256149495"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-11-21-13-07-48/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-master1 hash: 3c8f61a122c8e355df03d157fa6c23fc
Static pod: kube-apiserver-master1 hash: 3c8f61a122c8e355df03d157fa6c23fc
Static pod: kube-apiserver-master1 hash: 3c8f61a122c8e355df03d157fa6c23fc
Static pod: kube-apiserver-master1 hash: 6f15f917043f6e456a012e8b45f57c03
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-11-21-13-07-48/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-master1 hash: 5a2921269046b06a9e27540a966d9134
...(same line repeats while waiting for the kubelet to restart the component)...
Static pod: kube-controller-manager-master1 hash: ac2cd7a075ba83f2bae5ad1f8f5516a9
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-11-21-13-07-48/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-master1 hash: d7cc8771deae6f604bf4c846a40e8638
...(same line repeats while waiting for the kubelet to restart the component)...
Static pod: kube-scheduler-master1 hash: d26a55167803c084a5cb882c2d5bfba7
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane='' to Nodes with label node-role.kubernetes.io/master='' (deprecated)
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.23.2". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
root@master1:~# apt-get install kubelet=1.23.2-00
Reading package lists... Done
Building dependency tree       
Reading state information... Done
kubelet is already the newest version (1.23.2-00).
0 upgraded, 0 newly installed, 0 to remove and 55 not upgraded.
root@master1:~# apt-get install kubectl=1.23.2-00
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages will be upgraded:
  kubectl
1 upgraded, 0 newly installed, 0 to remove and 55 not upgraded.
Need to get 8,929 kB of archives.
After this operation, 0 B of additional disk space will be used.
Get:1 http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial/main amd64 kubectl amd64 1.23.2-00 [8,929 kB]
Fetched 8,929 kB in 6s (1,602 kB/s)  
(Reading database ... 88009 files and directories currently installed.)
Preparing to unpack .../kubectl_1.23.2-00_amd64.deb ...
Unpacking kubectl (1.23.2-00) over (1.23.1-00) ...
Setting up kubectl (1.23.2-00) ...
root@master1:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.2", GitCommit:"9d142434e3af351a628bffee3939e64c681afa4d", GitTreeState:"clean", BuildDate:"2022-01-19T17:35:46Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.2", GitCommit:"9d142434e3af351a628bffee3939e64c681afa4d", GitTreeState:"clean", BuildDate:"2022-01-19T17:29:16Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}
root@master1:~# kubelet --version
Kubernetes v1.23.2
On the node (node1):
root@node1:/home/chenxi# apt-get install kubelet=1.23.2-00
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following held packages will be changed:
  kubelet
The following packages will be upgraded:
  kubelet
1 upgraded, 0 newly installed, 0 to remove and 59 not upgraded.
Need to get 19.5 MB of archives.
After this operation, 0 B of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial/main amd64 kubelet amd64 1.23.2-00 [19.5 MB]
Fetched 19.5 MB in 6s (3,138 kB/s)                                                                                                                                                                              
(Reading database ... 88305 files and directories currently installed.)
Preparing to unpack .../kubelet_1.23.2-00_amd64.deb ...
Unpacking kubelet (1.23.2-00) over (1.23.1-00) ...
Setting up kubelet (1.23.2-00) ...

Check the pod status:
root@master1:/home/chenxi# kubectl get pod -n kube-system -w
NAME                                       READY   STATUS    RESTARTS      AGE
calico-kube-controllers-677cd97c8d-mtlz6   1/1     Running   0             28m
calico-node-nrtpb                          1/1     Running   1 (15h ago)   15h
calico-node-zhb6k                          1/1     Running   1 (15h ago)   15h
coredns-65c54cc984-ht4fs                   1/1     Running   0             28m
coredns-65c54cc984-wfc4s                   1/1     Running   0             28m
etcd-master1                               1/1     Running   1 (15h ago)   15h
kube-apiserver-master1                     1/1     Running   0             7m19s
kube-controller-manager-master1            1/1     Running   0             6m30s
kube-proxy-dtkxb                           1/1     Running   0             5m58s
kube-proxy-ngc6q                           1/1     Running   0             5m55s
kube-scheduler-master1                     1/1     Running   0             6m15s
Restore scheduling on master1:
root@master1:/home/chenxi# kubectl uncordon master1
node/master1 uncordoned
root@master1:/home/chenxi# kubectl get node    # check versions after the upgrade
NAME      STATUS   ROLES                  AGE   VERSION
master1   Ready    control-plane,master   15h   v1.23.2
node1     Ready    <none>                 15h   v1.23.2



[student@node-1] $ kubectl config use-context mk8s
Start:
[student@node-1] $ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01 Ready control-plane,master 38d v1.23.1
node-1 Ready <none> 38d v1.23.1
# cordon: stop scheduling. Marks the node SchedulingDisabled; new pods will not be scheduled onto it, but pods already on it are unaffected.
# drain: evict the node. First evicts the pods on the node so they are recreated on other nodes, then marks the node SchedulingDisabled.
[student@node-1] $ kubectl cordon master01
[student@node-1] $ kubectl drain master01 --delete-emptydir-data --ignore-daemonsets --force
# ssh to the master node and switch to root
[student@node-1] $ ssh master01
[student@master01] $ sudo -i
[root@master01] # apt-cache show kubeadm | grep 1.23.2
[root@master01] # apt-get update
[root@master01] # apt-get install kubeadm=1.23.2-00
# verify the upgrade plan
[root@master01] # kubeadm upgrade plan
# skip etcd and upgrade everything else; answer y when prompted
[root@master01] # kubeadm upgrade apply v1.23.2 --etcd-upgrade=false
Upgrade kubelet:
[root@master01] # apt-get install kubelet=1.23.2-00
[root@master01] # kubelet --version
Upgrade kubectl:
[root@master01] # apt-get install kubectl=1.23.2-00
[root@master01] # kubectl version
# exit root, returning to student@master01
[root@master01] # exit
# exit master01, returning to student@node-1
[student@master01] $ exit
[student@node-1] $
Do not type exit more times than needed, or you will drop out of the exam environment.
Restore scheduling on master01:
[student@node-1] $ kubectl uncordon master01
Check that master01 is Ready:
[student@node-1] $ kubectl get node
NAME STATUS ROLES AGE VERSION
master01 Ready control-plane,master 38d v1.23.2
node-1 Ready <none> 38d v1.23.1

  

3.1.4 Question 4: etcd backup and restore — submission guidelines. Search the official docs for the keyword "upgrade-etcd": https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/configure-upgrade-etcd/
Backup:
root@master1:~# mkdir  /srv/data
root@master1:~# sudo ETCDCTL_API=3 etcdctl --endpoints="https://127.0.0.1:2379" --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot save /srv/data/etcd-snapshot.db
{"level":"info","ts":1700573892.0990024,"caller":"snapshot/v3_snapshot.go:119","msg":"created temporary db file","path":"/srv/data/etcd-snapshot.db.part"}
{"level":"info","ts":"2023-11-21T13:38:12.104Z","caller":"clientv3/maintenance.go:200","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":1700573892.104788,"caller":"snapshot/v3_snapshot.go:127","msg":"fetching snapshot","endpoint":"https://127.0.0.1:2379"}
{"level":"info","ts":"2023-11-21T13:38:12.165Z","caller":"clientv3/maintenance.go:208","msg":"completed snapshot read; closing"}
{"level":"info","ts":1700573892.1732032,"caller":"snapshot/v3_snapshot.go:142","msg":"fetched snapshot","endpoint":"https://127.0.0.1:2379","size":"4.1 MB","took":0.07412492}
{"level":"info","ts":1700573892.1735168,"caller":"snapshot/v3_snapshot.go:152","msg":"saved","path":"/srv/data/etcd-snapshot.db"}
Snapshot saved at /srv/data/etcd-snapshot.db
root@master1:~# ls /srv/data/
etcd-snapshot.db
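Before submitting, the snapshot file can be sanity-checked with etcdctl, which prints its hash, revision count, and key count. This runs on the same host against the saved file (the path follows the question):

```shell
ETCDCTL_API=3 etcdctl snapshot status /srv/data/etcd-snapshot.db -w table
```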

Restore:

root@master1:~# sudo etcdctl --endpoints="https://127.0.0.1:2379" --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key snapshot restore /srv/data/etcd-snapshot.db
{"level":"info","ts":1700574115.1576464,"caller":"snapshot/v3_snapshot.go:296","msg":"restoring snapshot","path":"/srv/data/etcd-snapshot.db","wal-dir":"default.etcd/member/wal","data-dir":"default.etcd","snap-dir":"default.etcd/member/snap"}
{"level":"info","ts":1700574115.1802545,"caller":"mvcc/kvstore.go:380","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":75402}
{"level":"info","ts":1700574115.1919398,"caller":"membership/cluster.go:392","msg":"added member","cluster-id":"cdf818194e3a8c32","local-member-id":"0","added-peer-id":"8e9e05c52164694d","added-peer-peer-urls":["http://localhost:2380"]}
{"level":"info","ts":1700574115.200254,"caller":"snapshot/v3_snapshot.go:309","msg":"restored snapshot","path":"/srv/data/etcd-snapshot.db","wal-dir":"default.etcd/member/wal","data-dir":"default.etcd","snap-dir":"default.etcd/member/snap"}





Backup:
# If you use ETCDCTL_API=3 inline instead of export ETCDCTL_API=3, you must prefix every etcdctl command below with ETCDCTL_API=3.
# If you get "permission denied", you lack privileges; prefix the command with sudo.
student@node-1:~$ export ETCDCTL_API=3
student@node-1:~$ sudo ETCDCTL_API=3 etcdctl --endpoints="https://127.0.0.1:2379" --cacert=/opt/KUIN000601/ca.crt --cert=/opt/KUIN000601/etcd-client.crt --key=/opt/KUIN000601/etcd-client.key snapshot save /srv/data/etcd-snapshot.db
Restore:
student@node-1:~$ export ETCDCTL_API=3
student@node-1:~$ sudo etcdctl --endpoints="https://127.0.0.1:2379" --cacert=/opt/KUIN000601/ca.crt --cert=/opt/KUIN000601/etcd-client.crt --key=/opt/KUIN000601/etcd-client.key snapshot restore /var/lib/backup/etcd-snapshot-previous.db
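One pitfall worth knowing: `sudo export ETCDCTL_API=3` does not work, because `export` is a shell builtin, not a program sudo can execute, and sudo resets the environment by default anyway. That is why the working backup command prefixes the variable inline (`sudo ETCDCTL_API=3 etcdctl ...`). The scoping difference between an inline assignment and `export` can be demonstrated with plain sh (`DEMO_API` is just an illustrative variable name):

```shell
# An inline assignment applies only to the single command it prefixes:
DEMO_API=3 sh -c 'echo "inline: $DEMO_API"'   # prints "inline: 3"
echo "after: ${DEMO_API:-unset}"              # prints "after: unset"

# export makes the variable visible to every subsequent command:
export DEMO_API=3
sh -c 'echo "exported: $DEMO_API"'            # prints "exported: 3"
```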