
Ubuntu K8s

https://serious-lose.notion.site/Ubuntu-K8s-d8d6a978ad784c1baa2fc8c531fbce68?pvs=74

2-core / 2 GB RAM, Ubuntu 20.04, IP 172.24.53.10

Component   kubeadm   kubelet   kubectl
Version     1.23.0    1.23.0    1.23.0

kubeadm, kubelet, and kubectl are three key components of the Kubernetes ecosystem.

kubeadm

  • Used to install and manage Kubernetes clusters. It provides a simple, fast way to create a cluster, covering initialization of the control-plane node, adding worker nodes, and managing the cluster lifecycle.
  • kubeadm init initializes a new cluster on the control-plane node, and kubeadm join adds worker nodes to an existing cluster.

kubelet

  • One of the core Kubernetes components; it manages the lifecycle of Pods on every worker node.
  • It fetches Pod specifications from the API server and makes sure the corresponding containers run on the node: kubelet monitors container state and drives it toward the desired state.
  • kubelet also performs health checks and reports node and container status.

kubectl

  • The Kubernetes command-line tool, used to interact with a cluster.
  • With kubectl you can deploy applications, manage cluster resources, and troubleshoot problems. For example, kubectl get pods lists the currently running Pods, and kubectl apply -f <file> creates or updates resources from a manifest, as shown below.
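
A few everyday kubectl invocations for reference (a minimal sketch; the names in angle brackets are placeholders):

// list Pods in every namespace
kubectl get pods --all-namespaces
// create or update resources from a manifest
kubectl apply -f <file>.yaml
// inspect one Pod in detail
kubectl describe pod <pod-name>
// follow a Pod's logs
kubectl logs -f <pod-name>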

Kubernetes (K8s) supports several container runtimes. A runtime manages the container lifecycle: pulling images and creating, running, and stopping containers.

Some common Kubernetes container runtimes:

1. Docker

  • Description: Docker is the most widely used container engine; it provides a simple way to package, distribute, and run applications so they behave consistently across environments.
  • Status: Docker is still widely used, but Kubernetes deprecated its built-in Docker support (dockershim) in v1.20 and removed it in v1.24. Docker-built images still run fine, because Kubernetes talks to Docker's underlying runtime, containerd.

2. containerd

  • Description: containerd is a high-performance container runtime covering the core container lifecycle operations. It began as a component of Docker and was later spun out so it can be used on its own.
  • Features: it implements the OCI (Open Container Initiative) standards and therefore integrates well with Kubernetes.

3. CRI-O

  • Description: CRI-O is a lightweight container runtime built specifically for the Kubernetes CRI (Container Runtime Interface). It lets Kubernetes run any OCI-compliant container image and integrates cleanly with K8s.
  • Features: CRI-O aims for a minimal footprint, dropping unnecessary components to keep Kubernetes performance lean.

4. Podman

  • Description: Podman is a daemonless container tool that can run containers in rootless (non-privileged) mode. Its CLI closely mirrors Docker's, so it is easy to pick up.
  • Features: Podman can manage both pods and individual containers and interoperates with Kubernetes, but it is not a runtime Kubernetes supports out of the box.

5. rkt

  • Description: rkt (pronounced "rocket") was a container runtime developed by CoreOS with an emphasis on security and flexibility. It once integrated with Kubernetes, but as containerd and CRI-O became the dominant CRI runtimes the project was eventually retired.
  • Status: rkt usage in the community has dwindled.

Set the hostname

hostnamectl set-hostname master

Disable swap

// disable temporarily (until reboot)
swapoff -a
// disable permanently: comment out the swap entry in /etc/fstab
sed -i '/ swap / s/^/#/' /etc/fstab

Disable the firewall

ufw disable

Check the firewall status

ufw status
Status: inactive

Configure the bridge network parameters

cat << EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
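
These sysctls only take effect once the br_netfilter module is loaded and the files are re-read; a minimal sketch of applying them:

// load the kernel module the bridge sysctls depend on
modprobe br_netfilter
// re-read every file under /etc/sysctl.d, including k8s.conf above
sysctl --system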

Install kubelet, kubeadm, and kubectl
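
The install log below pulls packages from the Aliyun Kubernetes apt mirror, so that repository must be configured first. A minimal sketch of the usual setup on Ubuntu 20.04 (treat the exact steps as an assumption for your environment):

apt-get update && apt-get install -y apt-transport-https ca-certificates curl
curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get update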

// latest version
sudo apt install -y kubelet kubeadm kubectl
// or pin a specific version
sudo apt install -y kubelet=1.23.0-00 kubeadm=1.23.0-00 kubectl=1.23.0-00
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following NEW packages will be installed:
  kubeadm kubectl kubelet
0 upgraded, 3 newly installed, 0 to remove and 85 not upgraded.
Need to get 37.0 MB of archives.
After this operation, 216 MB of additional disk space will be used.
Get:1 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubelet amd64 1.23.0-00 [19.5 MB]
Get:2 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubectl amd64 1.23.0-00 [8,932 kB]
Get:3 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubeadm amd64 1.23.0-00 [8,588 kB]
Fetched 37.0 MB in 2s (17.8 MB/s)    
Selecting previously unselected package kubelet.
(Reading database ... 133517 files and directories currently installed.)
Preparing to unpack .../kubelet_1.23.0-00_amd64.deb ...
Unpacking kubelet (1.23.0-00) ...
Selecting previously unselected package kubectl.
Preparing to unpack .../kubectl_1.23.0-00_amd64.deb ...
Unpacking kubectl (1.23.0-00) ...
Selecting previously unselected package kubeadm.
Preparing to unpack .../kubeadm_1.23.0-00_amd64.deb ...
Unpacking kubeadm (1.23.0-00) ...
Setting up kubectl (1.23.0-00) ...
Setting up kubelet (1.23.0-00) ...
Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
Setting up kubeadm (1.23.0-00) ...

Pin the package versions

sudo apt-mark hold kubelet kubeadm kubectl
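
apt-mark hold keeps routine apt upgrades from moving these packages off 1.23.0; to upgrade deliberately later, release the hold first (sketch):

sudo apt-mark unhold kubelet kubeadm kubectl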

Preflight check: verify the environment is ready to run a Kubernetes cluster. (The fatal errors below come from leftover state of an earlier cluster on this host: ports in use and existing manifests. The reset steps later in this article clean that up.)

sudo kubeadm init phase preflight
I1210 22:20:42.740114  277592 version.go:255] remote version is much newer: v1.31.3; falling back to: stable-1.23
W1210 22:20:52.741562  277592 version.go:103] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.23.txt": Get "https://cdn.dl.k8s.io/release/stable-1.23.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W1210 22:20:52.741591  277592 version.go:104] falling back to the local client version: v1.23.0
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 27.3.1. Latest validated version: 20.10
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR Port-6443]: Port 6443 is in use                 // port already in use
	[ERROR Port-10259]: Port 10259 is in use               // port already in use
	[ERROR Port-10257]: Port 10257 is in use               // port already in use
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists                     // file already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists   // file already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists                     // file already exists
	[ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists                                         // file already exists
	[ERROR Port-10250]: Port 10250 is in use               // port already in use
	[ERROR Port-2379]: Port 2379 is in use                 // port already in use
	[ERROR Port-2380]: Port 2380 is in use                 // port already in use
	[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Run kubeadm init. Note: change apiserver-advertise-address to your own IP and kubernetes-version to the version you installed.

sudo kubeadm init \
--apiserver-advertise-address=172.24.53.10 \
--apiserver-bind-port=6443 \
--image-repository=registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.23.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--ignore-preflight-errors=all
[init] Using Kubernetes version: v1.23.0
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 27.3.1. Latest validated version: 20.10
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [izuf6dy59yl2x7ri2b5dmqz kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.24.53.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [izuf6dy59yl2x7ri2b5dmqz localhost] and IPs [172.24.53.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [izuf6dy59yl2x7ri2b5dmqz localhost] and IPs [172.24.53.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 10.012070 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node izuf6dy59yl2x7ri2b5dmqz as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node izuf6dy59yl2x7ri2b5dmqz as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: j1fth4.uofits58xvvkcfmm
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.24.53.10:6443 --token j1fth4.uofits58xvvkcfmm \
	--discovery-token-ca-cert-hash sha256:a2cf513f8205220c3a912467550cf607ac281716b7c0109a96db203898fe58f8

Copy the admin kubeconfig into ~/.kube

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
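
A quick sanity check that kubectl can reach the API server through the copied kubeconfig:

kubectl cluster-info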

List the Pods in all namespaces of the cluster

kubectl get pods --all-namespaces
NAMESPACE     NAME                                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-6d8c4cb4d-hrfqc                           0/1     Pending   0          24h
kube-system   coredns-6d8c4cb4d-xkvwn                           0/1     Pending   0          24h
kube-system   etcd-izuf6dy59yl2x7ri2b5dmqz                      1/1     Running   1          24h
kube-system   kube-apiserver-izuf6dy59yl2x7ri2b5dmqz            1/1     Running   1          24h
kube-system   kube-controller-manager-izuf6dy59yl2x7ri2b5dmqz   1/1     Running   1          24h
kube-system   kube-proxy-666qb                                  1/1     Running   0          24h
kube-system   kube-scheduler-izuf6dy59yl2x7ri2b5dmqz            1/1     Running   1          24h

List all nodes in the cluster

kubectl get nodes
NAME                      STATUS     ROLES                  AGE   VERSION
izuf6dy59yl2x7ri2b5dmqz   NotReady   control-plane,master   24h   v1.23.0

Install Calico

The node stays NotReady and CoreDNS stays Pending until a CNI network plugin is installed, so deploy Calico next.
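
calico.yaml has to be on disk before it can be applied; a minimal sketch of fetching it (URL is an assumption — pick a Calico release compatible with Kubernetes 1.23):

wget https://docs.projectcalico.org/manifests/calico.yaml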

kubectl apply -f calico.yaml
kubectl get pods --all-namespaces
NAMESPACE     NAME                                              READY   STATUS    RESTARTS      AGE
kube-system   calico-kube-controllers-6d768559b-dpcmf           1/1     Running   0             85s
kube-system   calico-node-w8vlc                                 1/1     Running   0             85s
kube-system   coredns-6d8c4cb4d-hrfqc                           1/1     Running   0             25h
kube-system   coredns-6d8c4cb4d-xkvwn                           1/1     Running   0             25h
kube-system   etcd-izuf6dy59yl2x7ri2b5dmqz                      1/1     Running   1             25h
kube-system   kube-apiserver-izuf6dy59yl2x7ri2b5dmqz            1/1     Running   1             25h
kube-system   kube-controller-manager-izuf6dy59yl2x7ri2b5dmqz   1/1     Running   2 (43s ago)   25h
kube-system   kube-proxy-666qb                                  1/1     Running   0             25h
kube-system   kube-scheduler-izuf6dy59yl2x7ri2b5dmqz            1/1     Running   2 (42s ago)   25h

Reinstall kubelet


Uninstall kubelet, kubeadm, and kubectl

sudo apt-get remove --purge kubelet kubeadm kubectl

Stop the kubelet service

sudo systemctl stop kubelet

Reset kubeadm

sudo kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W1209 23:08:52.957867  100146 reset.go:101] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: configmaps "kubeadm-config" not found
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1209 23:08:56.665154  100146 removeetcdmember.go:80] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.

Delete the data under /var/lib/kubelet: kubeadm reset removes a lot of state, but it does not touch /var/lib/kubelet. To get back to a truly clean slate, delete that directory manually.

sudo rm -rf /var/lib/kubelet

Delete ~/.kube

sudo rm -rf ~/.kube

Initialize the master again

sudo kubeadm init \
--apiserver-advertise-address=172.24.53.10 \
--apiserver-bind-port=6443 \
--image-repository=registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.23.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--ignore-preflight-errors=all

ERROR

[init] Using Kubernetes version: v1.23.0
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 27.3.1. Latest validated version: 20.10
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [izuf6dy59yl2x7ri2b5dmqz kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.24.53.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [izuf6dy59yl2x7ri2b5dmqz localhost] and IPs [172.24.53.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [izuf6dy59yl2x7ri2b5dmqz localhost] and IPs [172.24.53.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in docker:
	- 'docker ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

View detailed logs

journalctl -xeu kubelet, or re-run the init command with --v=5 added to get verbose output.

ERROR

"Failed to run kubelet" err="failed to run Kubelet: misconfiguration: kubelet cgroup driver: \"systemd\" is different from docker cgroup driver: \"cgroupfs\""

kubelet's cgroup driver defaults to systemd, while Docker here is using cgroupfs; fix the mismatch by changing the Docker configuration.

vim /etc/docker/daemon.json

增加 "exec-opts": ["native.cgroupdriver=systemd"] 配置 native.cgroupdriver=systemdsystemd

{"exec-opts": ["native.cgroupdriver=systemd"]
}

Reload the systemd configuration and restart the Docker service

systemctl daemon-reload && systemctl restart docker

Verify the change took effect: docker info should now report "Cgroup Driver: systemd"

docker info
Client: Docker Engine - Community
 Version:    27.3.1
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.17.1
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.29.7
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 13
  Running: 10
  Paused: 0
  Stopped: 3
 Images: 44
 Server Version: 27.3.1
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 57f17b0a6295a39009d861b89e3b3b87b005ca27
 runc version: v1.1.14-0-g2c9f560
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: builtin
 Kernel Version: 5.4.0-182-generic
 Operating System: Ubuntu 20.04.6 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 2
 Total Memory: 1.846GiB
 Name: iZuf6dy59yl2x7ri2b5dmqZ
 ID: b3e06c55-1127-4727-b716-ace762d5ba1b
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: true
 Insecure Registries:
  106.15.127.43
  172.24.53.10
  harbor.net.com
  127.0.0.0/8
 Registry Mirrors:
  https://lbccmm6e.mirror.aliyuncs.com/
  https://docker.registry.cyou/
  https://docker-cf.registry.cyou/
  https://dockercf.jsdelivr.fyi/
  https://docker.jsdelivr.fyi/
  https://dockertest.jsdelivr.fyi/
  https://mirror.aliyuncs.com/
  https://dockerproxy.com/
  https://mirror.baidubce.com/
  https://docker.m.daocloud.io/
  https://docker.nju.edu.cn/
  https://docker.mirrors.sjtug.sjtu.edu.cn/
  https://docker.mirrors.ustc.edu.cn/
  https://mirror.iscas.ac.cn/
  https://docker.rainbond.cc/
 Live Restore Enabled: false

Stop Kubernetes

sudo systemctl stop kubelet

Reset and clean up the cluster state created by kubeadm

sudo kubeadm reset is the Kubernetes command for resetting and cleaning up state created by kubeadm. It is especially useful when you need to tear down an existing cluster or re-initialize one. In detail:

What it does

  1. Clears cluster state
    • kubeadm reset removes the Kubernetes configuration and state on the node, including the components kubeadm set up (API server, controller manager, scheduler, and so on).
  2. Returns the node to its pre-init state
    • After the command runs, the node is back to an uninitialized state, ready for a fresh cluster install or upgrade.
  3. Removes kubelet and CNI artifacts
    • The kubelet configuration and other files kubeadm laid down (such as network plugin state) are deleted. If you installed a network add-on (Flannel, Calico, etc.), its on-node state is affected as well.
  4. iptables and network settings
    • Note that, as the reset output earlier shows, the command does NOT clean up iptables/IPVS rules or the CNI configuration in /etc/cni/net.d; those must be removed manually.

When to use it

  • Re-initializing a cluster: in test or development environments you may reset frequently to try different settings or configurations.
  • Recovering from failure: when a cluster is broken beyond repair, resetting and starting over may be the fastest fix.
  • Clearing out accumulated configuration: after experimenting with many settings and parameters, reset gives you a clean environment to start from.

Caveats

  • Data loss: kubeadm reset deletes the Kubernetes state and configuration on the node, which can destroy all Pods, ReplicaSets, Deployments, Services, and so on.
  • Persistent storage: reset only removes cluster state and configuration; persistently mounted volumes may need to be cleaned up by hand.
  • kubelet dependency: make sure kubelet is running before executing the command; if kubelet is stopped, the reset may fail partway through.

Run kubeadm init again

sudo kubeadm init \
--apiserver-advertise-address=172.24.53.10 \
--apiserver-bind-port=6443 \
--image-repository=registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.23.0 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 \
--ignore-preflight-errors=all \
--v=5

Confirm that a kubelet.service file exists under /etc/systemd/system/

Copy the unit file to /etc/systemd/system/kubelet.service. Note that /root/kubelet.service here is a previously backed-up copy; creating the file from scratch works just as well.

cp /root/kubelet.service /etc/systemd/system/
vim /etc/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/

[Service]
ExecStart=/usr/bin/kubelet
#ExecStartPre=/usr/bin/kubelet-pre-start.sh
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
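
After creating or editing the unit file, systemd has to reload it and kubelet should be re-enabled; a minimal sketch:

sudo systemctl daemon-reload
sudo systemctl enable --now kubelet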

Install the Kubernetes Dashboard

Download the manifest and rename it

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
mv recommended.yaml dashboard.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001   # added (exposes the dashboard on NodePort 30001)
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort        # added

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubernetesui/dashboard:v2.7.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubernetesui/metrics-scraper:v1.0.8
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

Check the images

grep "image:" dashboard.yaml
          image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubernetesui/dashboard:v2.7.0image: swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubernetesui/metrics-scraper:v1.0.8

Point the images at a domestic (China) mirror

sed -i 's#kubernetesui/dashboard:v2.4.0#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubernetesui/dashboard:v2.7.0#' dashboard.yaml
sed -i 's#kubernetesui/metrics-scraper:v1.0.7#swr.cn-north-4.myhuaweicloud.com/ddn-k8s/docker.io/kubernetesui/metrics-scraper:v1.0.8#' dashboard.yaml

Deploy the dashboard

kubectl apply -f dashboard.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

Check the service status

kubectl get pods -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-5b6f6c8f45-wp9dq   1/1     Running   0          13m
kubernetes-dashboard-6c685b564c-gqrf2        1/1     Running   0          13m

The dashboard is reachable at

https://106.15.127.43:30001/#/login


Get a login token

# create a service account
kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created

# grant it cluster-admin
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created

# fetch the account's token
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name:         dashboard-admin-token-m2hps
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 3802d503-3f30-439b-82c8-1a875e115f6c

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1099 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Ind4R0daN0Rla3lFSEdzaWRMS3JpRl9vRWpkT1lieGtXeDVrbS1FdkhyYVkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tbTJocHMiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzgwMmQ1MDMtM2YzMC00MzliLTgyYzgtMWE4NzVlMTE1ZjZjIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.MB85CeNXsLW1vNH_mKwSFxFcl57RzRc9Ou9CoqrTEVSNC42B4O0qLPdzx6_WD5KLR3jWSB3ynmSgkMHV1uzuve7CeHDK6UeoyR4v4o0Sldk8PGGVaHykacVzKg14kX0qoBHHSXBLJZp8s_-dzzIBNeexUEZiIjoUVKgOJUjp2k-_LGzH4GN8d6oKfo5M37XsDZ5cFQvfedGRNAho-GQkVCssVXt1SJlDwCdefhlmJa_D-awLlE3khrYimt_1CmFIibBoSfHI2Q55CgNveAagCvBM0c6HvmtZDHwdRfT96XwTyN1hJDZwVGFHq3uv-QtLhEAPia1CMWxOvvBQQZQUA


Deploy a test application: deploy an nginx app and expose it as a NodePort Service. Create a file named nginx-deploy.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-deploy
  name: nginx-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-deploy
  template:
    metadata:
      labels:
        app: nginx-deploy
    spec:
      containers:
        - image: registry.cn-shenzhen.aliyuncs.com/xiaohh-docker/nginx:1.25.4
          name: nginx
          ports:
            - containerPort: 80

---

apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-deploy
  name: nginx-svc
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
      nodePort: 30080
  selector:
    app: nginx-deploy
  type: NodePort

Deploy it

kubectl apply -f nginx-deploy.yaml
deployment.apps/nginx-deploy created
service/nginx-svc created

Check the Pods

kubectl get pods --all-namespaces
NAMESPACE     NAME                                              READY   STATUS    RESTARTS         AGE
default       nginx-deploy-69d69b77bb-gqm6s                     1/1     Running   0                10m // success
kube-system   calico-kube-controllers-6d768559b-dpcmf           1/1     Running   3 (30m ago)      22h
kube-system   calico-node-w8vlc                                 1/1     Running   6 (15m ago)      22h
kube-system   coredns-6d8c4cb4d-hrfqc                           1/1     Running   0                2d
kube-system   coredns-6d8c4cb4d-xkvwn                           1/1     Running   0                2d
kube-system   etcd-izuf6dy59yl2x7ri2b5dmqz                      1/1     Running   1                2d
kube-system   kube-apiserver-izuf6dy59yl2x7ri2b5dmqz            1/1     Running   4 (30m ago)      2d
kube-system   kube-controller-manager-izuf6dy59yl2x7ri2b5dmqz   1/1     Running   14 (8m47s ago)   2d
kube-system   kube-proxy-666qb                                  1/1     Running   0                2d
kube-system   kube-scheduler-izuf6dy59yl2x7ri2b5dmqz            1/1     Running   16 (8m49s ago)   2d
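
With the Pod running, nginx should answer on NodePort 30080; a quick check from the node itself (using this host's IP from the setup above):

curl http://172.24.53.10:30080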


Delete the Deployment

kubectl delete deployment nginx-deploy --namespace=default
deployment.apps "nginx-deploy" deleted

Delete the Pod (once the Deployment is gone its Pods are garbage-collected anyway; the explicit delete is shown for completeness)

kubectl delete pod nginx-deploy-69d69b77bb-gqm6s --namespace=default
pod "nginx-deploy-69d69b77bb-gqm6s" deleted

Check the Pods again

kubectl get pods --all-namespaces
NAMESPACE     NAME                                              READY   STATUS    RESTARTS         AGE
kube-system   calico-kube-controllers-6d768559b-dpcmf           1/1     Running   3 (30m ago)      22h
kube-system   calico-node-w8vlc                                 1/1     Running   6 (15m ago)      22h
kube-system   coredns-6d8c4cb4d-hrfqc                           1/1     Running   0                2d
kube-system   coredns-6d8c4cb4d-xkvwn                           1/1     Running   0                2d
kube-system   etcd-izuf6dy59yl2x7ri2b5dmqz                      1/1     Running   1                2d
kube-system   kube-apiserver-izuf6dy59yl2x7ri2b5dmqz            1/1     Running   4 (30m ago)      2d
kube-system   kube-controller-manager-izuf6dy59yl2x7ri2b5dmqz   1/1     Running   14 (8m47s ago)   2d
kube-system   kube-proxy-666qb                                  1/1     Running   0                2d
kube-system   kube-scheduler-izuf6dy59yl2x7ri2b5dmqz            1/1     Running   16 (8m49s ago)   2d

Master node taints

View the current taints

kubectl describe node <master-node-name>  

Look for the "Taints" field in the output; a master node usually carries a taint like the following:

node-role.kubernetes.io/master:NoSchedule 

Remove the taint (required on a single-node cluster, otherwise regular workloads are never scheduled onto the master)

kubectl taint nodes <master-node-name> node-role.kubernetes.io/master:NoSchedule-
kubectl taint nodes izuf6dy59yl2x7ri2b5dmqz node-role.kubernetes.io/master:NoSchedule-
node/izuf6dy59yl2x7ri2b5dmqz untainted
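
To confirm the taint is gone, check the node again; the Taints field should now read <none>:

kubectl describe node izuf6dy59yl2x7ri2b5dmqz | grep Taints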

Restart Kubernetes (kubelet)

sudo systemctl restart kubelet

