Harbor Deployment
Setting up a Harbor image registry, version v2.10.3
Contents
- I. Installing Harbor with Docker
  - 1. Harbor over HTTP
    - 1.1 Download the Harbor offline installer
    - 1.2 Edit the configuration file
    - 1.3 Run
    - 1.4 Access
  - 2. [Optional] Harbor over HTTPS
    - 2.1 Self-signed certificates
    - 2.2 Edit the configuration file
    - 2.3 Edit the hosts file
    - 2.4 Run
    - 2.5 Access
- II. Installing Harbor on Kubernetes
  - 1. Install NFS
  - 2. Install Helm
  - 3. Install ingress-nginx
    - 3.1 Download the chart
    - 3.2 Edit the configuration file
    - 3.3 Pull the images
    - 3.4 Install
  - 4. Install Harbor (on Kubernetes)
    - 4.1 Download the chart
    - 4.2 Prepare the NFS server
    - 4.3 Create the PVs
    - 4.4 Configuration file (key changes)
    - 4.5 Install
    - 4.6 Verify the pods are running
Environment

Note: the machine 192.168.100.150 runs the NFS server, which is consumed by the Kubernetes cluster; the same machine also hosts the Docker-based install.

OS | CentOS 7 | CentOS 7 | CentOS 7 | CentOS 7 |
---|---|---|---|---|
Kernel | Linux 3.10.0-1160.119.1.el7.x86_64 | Linux 3.10.0-1160.119.1.el7.x86_64 | Linux 3.10.0-1160.119.1.el7.x86_64 | Linux 3.10.0-1160.119.1.el7.x86_64 |
IP | 192.168.100.100 | 192.168.100.200 | 192.168.100.250 | 192.168.100.150 |
Docker version | —— | —— | —— | 20.10.15 |
docker-compose version | —— | —— | —— | 1.28.2 |
Kubernetes version | 1.28.10 | 1.28.10 | 1.28.10 | —— |
This document does not cover installing Kubernetes, Docker, or docker-compose. See the companion guides:
- Installing Docker and Docker Compose on Linux (CentOS)
- Kubernetes installation guide, v1.28.10
I. Installing Harbor with Docker
1. Harbor over HTTP
1.1 Download the Harbor offline installer
# Download the offline installer
wget https://github.com/goharbor/harbor/releases/download/v2.10.3/harbor-offline-installer-v2.10.3.tgz
# Extract
mkdir -p /opt/software
tar -zxvf harbor-offline-installer-v2.10.3.tgz -C /opt/software
1.2 Edit the configuration file
cd /opt/software/harbor
# Make a working copy of the template
cp harbor.yml.tmpl harbor.yml
# Only a few changes are needed
hostname:        # set to this machine's IP address

# For plain HTTP, comment out the whole https block (port, certificate, private_key):
# https:
#   # https port for harbor, default is 443
#   port: 443
#   # The path of cert and key files for nginx
#   certificate: /data/cert/harbor/harbor.dongdong.com.cert
#   private_key: /data/cert/harbor/harbor.dongdong.com.key
#   # enable strong ssl ciphers (default: false)
#   # strong_ssl_ciphers: false

# HTTP listens on port 80 by default
external_url:    # http://<this machine's IP>:<port>

# Optionally change the admin password, or keep the default
harbor_admin_password: Harbor12345
# That is all that needs to change
1.3 Run
# Enter the directory
cd /opt/software/harbor
# Run the install script
sh install.sh
# Wait for it to finish
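To confirm the stack actually came up, docker-compose can list the Harbor containers (a quick sanity check, assuming the docker-compose v1 binary from the environment table):

cd /opt/software/harbor
docker-compose ps    # every component should report "Up" / "healthy"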
1.4 访问
# 浏览器输入 http://本机id:端口http://192.168.100.150默认账号 密码
admin Harbor12345# 如果是arm64 架构
# 需要修改 common 文件夹所有者和所有者组的
# chmod 777 -R common
# chown 1001.1002 -R common
# 修改common/config/registry 下面passwd 和 root.crt 文件夹所有者和所有者组的
# chown 10000.10000 passwd root.crt
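Because the registry is served over plain HTTP here, a Docker client must first trust it as an insecure registry before docker login/push will work. A minimal sketch (merge the key into any existing /etc/docker/daemon.json rather than overwriting it; busybox is just an example image, and the "library" project exists in Harbor by default):

cat > /etc/docker/daemon.json <<EOF
{
  "insecure-registries": ["192.168.100.150"]
}
EOF
systemctl restart docker

# Log in and push a test image
docker login -u admin -p Harbor12345 192.168.100.150
docker tag busybox:latest 192.168.100.150/library/busybox:latest
docker push 192.168.100.150/library/busybox:latest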
2. [Optional] Harbor over HTTPS
2.1 Self-signed certificates
mkdir -p /data/cert/harbor
cd /data/cert/harbor
# Generate the CA
openssl genrsa -out ca.key 4096
openssl req -x509 -new -nodes -sha512 -days 3650 \
  -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=harbor.dongdong.com" \
  -key ca.key -out ca.crt
# Generate the server key and certificate signing request
openssl genrsa -out harbor.dongdong.com.key 4096
openssl req -sha512 -new \
  -subj "/C=CN/ST=Beijing/L=Beijing/O=example/OU=Personal/CN=harbor.dongdong.com" \
  -key harbor.dongdong.com.key -out harbor.dongdong.com.csr
cat > v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
DNS.1=harbor.dongdong.com
EOF
# Sign the certificate
openssl x509 -req -sha512 -days 3650 -extfile v3.ext -CA ca.crt -CAkey ca.key -CAcreateserial \
  -in harbor.dongdong.com.csr -out harbor.dongdong.com.crt
openssl x509 -inform PEM -in harbor.dongdong.com.crt -out harbor.dongdong.com.cert
# Trust the certificate system-wide
cp harbor.dongdong.com.crt /etc/pki/ca-trust/source/anchors/
update-ca-trust
2.1 修改配置文件
hostname: harbor.dongdong.com# 修改harbor.yml 文件
# 打开这些
https:# https port for harbor, default is 443port: 443# The path of cert and key files for nginxcertificate: /data/cert/harbor/harbor.dongdong.com.certprivate_key: /data/cert/harbor/harbor.dongdong.com.key# enable strong ssl ciphers (default: false)# strong_ssl_ciphers: false
external_url: https://harbor.dongdong.com
2.3 Edit the hosts file
# Map the IP to the domain name
vi /etc/hosts
192.168.100.150 harbor.dongdong.com
2.4 Run
# Enter the directory
cd /opt/software/harbor
# Run the install script
sh install.sh
# Wait for it to finish
2.5 访问
# 浏览器输入 https://本机id:端口
https://harbor.dongdong.com默认账号 密码
admin Harbor12345
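For docker login against the self-signed certificate, a client that has not run update-ca-trust can instead drop the CA into Docker's standard per-registry trust directory (a sketch; the client also needs the /etc/hosts mapping from 2.3):

mkdir -p /etc/docker/certs.d/harbor.dongdong.com
cp /data/cert/harbor/ca.crt /etc/docker/certs.d/harbor.dongdong.com/
docker login -u admin -p Harbor12345 harbor.dongdong.com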
II. Installing Harbor on Kubernetes
1. Install NFS
# Run on 192.168.100.150
# Main package, provides the file system services
yum -y install nfs-utils
# Provides the RPC protocol
yum -y install rpcbind
# Start the services
systemctl restart rpcbind
systemctl restart nfs
systemctl enable nfs
systemctl enable rpcbind
# Check the export list
exportfs -v
# Check what the server shares
showmount -e localhost
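Before the cluster depends on this server, it is worth checking that the Kubernetes nodes can reach it; from any node (nfs-utils provides both the mount helper and showmount):

yum -y install nfs-utils
showmount -e 192.168.100.150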
2. Install Helm
# Installing on the master node is sufficient
# Download helm from https://get.helm.sh/helm-v3.15.2-linux-amd64.tar.gz
# If wget is missing: yum install -y wget
wget https://get.helm.sh/helm-v3.15.2-linux-amd64.tar.gz
# Extract and move the binary onto the PATH
tar -zxvf helm-v3.15.2-linux-amd64.tar.gz
mv linux-amd64/helm /usr/bin
# Print version info
helm version
version.BuildInfo{Version:"v3.15.2", GitCommit:"1a500d5625419a524fdae4b33de351cc4f58ec35", GitTreeState:"clean", GoVersion:"go1.22.4"}
3. Install ingress-nginx
3.1 Download the chart
# Add the official repository
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
# Create the directory
mkdir -p /opt/software/
# Download the chart
cd /opt/software/ && wget https://github.com/kubernetes/ingress-nginx/releases/download/helm-chart-4.10.1/ingress-nginx-4.10.1.tgz
# Extract
tar -zxvf ingress-nginx-4.10.1.tgz
3.2 Edit the configuration file
# Enter the chart directory and edit values.yaml
cd /opt/software/ingress-nginx
vi values.yaml

Only a handful of settings differ from the chart defaults; everything else in values.yaml can stay as shipped. The relevant excerpt:

## nginx configuration
## Ref: https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/index.md

# -- Override the deployment namespace; defaults to .Release.Namespace
namespaceOverride: "ingress-nginx"

controller:
  image:
    registry: registry.k8s.io
    image: ingress-nginx/controller
    tag: "v1.10.1"
    # comment out both digests:
    # digest: sha256:e24f39d3eed6bcc239a56f20098878845f62baa34b9f2be2fd2c38ce9fb0f29e
    # digestChroot: sha256:c155954116b397163c88afcb3252462771bd7867017e8a17623e83601bab7ac7
    pullPolicy: IfNotPresent

  # Keep resolving cluster-internal names while running with hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet

  # Required for CNI-based installs (such as kubeadm) so the controller
  # binds ports 80/443 directly on the host
  hostNetwork: true

  ingressClassResource:
    name: nginx
    enabled: true
  ingressClass: nginx

  kind: Deployment
  replicaCount: 1

  admissionWebhooks:
    patch:
      image:
        registry: registry.k8s.io
        image: ingress-nginx/kube-webhook-certgen
        tag: v1.4.1
        # comment out the digest:
        # digest: sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366
        pullPolicy: IfNotPresent

The digests are commented out because the next step pulls the images from a mirror and retags them: with a digest pinned, kubelet would try to pull from registry.k8s.io by digest instead of using the locally retagged images.
3.3 Pull the images
# Pre-pull the images from the Huawei Cloud mirror and retag them to the upstream names.
# Run this on every node that may schedule the controller pod.
ctr -n k8s.io images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/ingress-nginx/controller:v1.10.1
ctr -n k8s.io images tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/ingress-nginx/controller:v1.10.1 registry.k8s.io/ingress-nginx/controller:v1.10.1
ctr -n k8s.io images pull swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
ctr -n k8s.io images tag swr.cn-north-4.myhuaweicloud.com/ddn-k8s/registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1 registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
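To verify the pre-pull before installing, list the containerd images on the node; both retagged names should appear:

ctr -n k8s.io images ls | grep -E 'registry.k8s.io/ingress-nginx/(controller|kube-webhook-certgen)'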
3.4 Install
# Create the namespace
kubectl create ns ingress-nginx
# From inside the ingress-nginx chart directory
helm install ingress-nginx . -f values.yaml -n ingress-nginx
# Confirm it is running
[root@kube-master ~]# kubectl get pod -n ingress-nginx
NAME                                       READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-79578949b-vbgn7   1/1     Running   0          25s
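Because the controller runs with hostNetwork: true, it binds ports 80 and 443 directly on whichever node it lands on; an nginx 404 from that node confirms it is serving (replace the placeholder with the node's IP):

curl -i http://<controller-node-ip>/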
4. Install Harbor (on Kubernetes)
4.1 Download the chart
# Add the official Harbor helm repository
helm repo add harbor https://helm.goharbor.io
# Create the directory
mkdir -p /opt/software/
# Download the chart
cd /opt/software/ && wget https://helm.goharbor.io/harbor-1.14.3.tgz
# Extract
tar -zxvf harbor-1.14.3.tgz
4.2 Prepare the NFS server
# NFS export configuration, on 192.168.100.150
# Append the exports
cat >> /etc/exports <<EOF
/data/nfs/harbor/registry *(rw,no_root_squash,sync,insecure)
/data/nfs/harbor/jobservice *(rw,no_root_squash,sync,insecure)
/data/nfs/harbor/database *(rw,no_root_squash,sync,insecure)
/data/nfs/harbor/redis *(rw,no_root_squash,sync,insecure)
/data/nfs/harbor/trivy *(rw,no_root_squash,sync,insecure)
EOF
# Create the directories
mkdir -p /data/nfs/harbor/registry /data/nfs/harbor/jobservice /data/nfs/harbor/database /data/nfs/harbor/redis /data/nfs/harbor/trivy
# Open up permissions
chmod 777 /data/nfs/harbor/*
# Restart the services
systemctl restart rpcbind
systemctl restart nfs
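Restarting works, but exportfs can also reload /etc/exports without interrupting existing mounts; either way, confirm that all five shares are exported:

exportfs -ra    # re-export everything in /etc/exports
exportfs -v
showmount -e localhost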
4.3 Create the PVs
# Write the PV manifests; run on the k8s master node.
# PersistentVolumes are cluster-scoped, so no namespace is set.
cat > /opt/software/harbor/harbor-pv.yaml <<EOF
# 1) registry
apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-registry
  labels:
    app: harbor-registry
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: harbor-nfs
  nfs:
    path: /data/nfs/harbor/registry
    server: 192.168.100.150
# 2) jobservice
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-jobservice
  labels:
    app: harbor-jobservice
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: harbor-nfs
  nfs:
    path: /data/nfs/harbor/jobservice
    server: 192.168.100.150
# 3) database
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-database
  labels:
    app: harbor-database
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: harbor-nfs
  nfs:
    path: /data/nfs/harbor/database
    server: 192.168.100.150
# 4) redis
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-redis
  labels:
    app: harbor-redis
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: harbor-nfs
  nfs:
    path: /data/nfs/harbor/redis
    server: 192.168.100.150
# 5) trivy
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-trivy
  labels:
    app: harbor-trivy
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: harbor-nfs
  nfs:
    path: /data/nfs/harbor/trivy
    server: 192.168.100.150
EOF
# Create the PVs
kubectl apply -f /opt/software/harbor/harbor-pv.yaml
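The five volumes should sit in Available status until the Harbor chart's PVCs claim them:

kubectl get pv    # all five harbor-* volumes should show STATUS "Available"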
4.4 Configuration file (key changes)
The chart ships a long values.yaml; only the settings below differ from the defaults, and everything else (including the component image tags, all v2.10.3) can stay as shipped. The relevant excerpt:

expose:
  # Expose the service as "ingress", "clusterIP", "nodePort" or "loadBalancer"
  type: ingress
  tls:
    enabled: true
    # "auto" generates the TLS certificate automatically
    certSource: auto
  ingress:
    hosts:
      core: harbor.dongdong.com   # change to your own domain
    controller: default
    className: ""
    annotations:
      ingress.kubernetes.io/ssl-redirect: "true"
      ingress.kubernetes.io/proxy-body-size: "0"
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
      nginx.ingress.kubernetes.io/proxy-body-size: "0"

# The external URL for the Harbor core service, used to
# 1) populate the docker/helm commands shown on the portal
# 2) populate the token service URL returned to the docker client
externalURL: https://harbor.dongdong.com   # change to your own domain

persistence:
  enabled: true
  # "keep" avoids removing the PVCs during a helm delete
  resourcePolicy: "keep"
  persistentVolumeClaim:
    registry:
      existingClaim: ""
      storageClass: "harbor-nfs"   # must match storageClassName in the PVs
      accessMode: ReadWriteOnce
      size: 50Gi                   # must match the PV's storage, or the claim will not bind
    jobservice:
      jobLog:
        existingClaim: ""
        storageClass: "harbor-nfs"
        accessMode: ReadWriteOnce
        size: 10Gi
    database:
      existingClaim: ""
      storageClass: "harbor-nfs"
      accessMode: ReadWriteOnce
      size: 10Gi
    redis:
      existingClaim: ""
      storageClass: "harbor-nfs"
      accessMode: ReadWriteOnce
      size: 10Gi
    trivy:
      existingClaim: ""
      storageClass: "harbor-nfs"
      accessMode: ReadWriteOnce
      size: 10Gi
  imageChartStorage:
    # "filesystem" keeps images on the persistent volumes above
    type: filesystem
    filesystem:
      rootdirectory: /storage

# The initial password of the Harbor admin; change it from the portal after launching
harborAdminPassword: "Harbor12345"

# The secret key used for encryption; must be a string of 16 chars
secretKey: "not-a-secure-key"

# debug, info, warning, error or fatal
logLevel: info
Alternatively set existingSecret to use an existing secret# Must be a string of 16 chars.secret: ""# Fill in the name of a kubernetes secret if you want to use your own# If using existingSecret, the key must be secretexistingSecret: ""# Fill the name of a kubernetes secret if you want to use your own# TLS certificate and private key for token encryption/decryption.# The secret must contain keys named:# "tls.key" - the private key# "tls.crt" - the certificatesecretName: ""# If not specifying a preexisting secret, a secret can be created from tokenKey and tokenCert and used instead.# If none of secretName, tokenKey, and tokenCert are specified, an ephemeral key and certificate will be autogenerated.# tokenKey and tokenCert must BOTH be set or BOTH unset.# The tokenKey value is formatted as a multiline string containing a PEM-encoded RSA key, indented one more than tokenKey on the following line.tokenKey: |# If tokenKey is set, the value of tokenCert must be set as a PEM-encoded certificate signed by tokenKey, and supplied as a multiline string, indented one more than tokenCert on the following line.tokenCert: |# The XSRF key. Will be generated automatically if it isn't specifiedxsrfKey: ""# If using existingSecret, the key is defined by core.existingXsrfSecretKeyexistingXsrfSecret: ""# If using existingSecret, the keyexistingXsrfSecretKey: CSRF_KEY## The priority class to run the pod aspriorityClassName:# The time duration for async update artifact pull_time and repository# pull_count, the unit is second. Will be 10 seconds if it isn't set.# eg. artifactPullAsyncFlushDuration: 10artifactPullAsyncFlushDuration:gdpr:deleteUser: falseauditLogsCompliant: falsejobservice:image:repository: goharbor/harbor-jobservicetag: v2.10.3replicas: 1revisionHistoryLimit: 10# set the service account to be used, default if left emptyserviceAccountName: ""# mount the service account tokenautomountServiceAccountToken: falsemaxJobWorkers: 10# The logger for jobs: "file", "database" or "stdout"jobLoggers:- file# - database# - stdout# The jobLogger sweeper duration (ignored if `jobLogger` is `stdout`)loggerSweeperDuration: 14 #daysnotification:webhook_job_max_retry: 3webhook_job_http_client_timeout: 3 # in secondsreaper:# the max time to wait for a task to finish, if unfinished after max_update_hours, the task will be mark as error, but the task will continue to run, default value is 24max_update_hours: 24# the max time for execution in running state without new task createdmax_dangling_hours: 168# resources:# requests:# memory: 256Mi# cpu: 100mextraEnvVars: []nodeSelector: {}tolerations: []affinity: {}# Spread Pods across failure-domains like regions, availability zones or nodestopologySpreadConstraints:# - maxSkew: 1# topologyKey: topology.kubernetes.io/zone# nodeTaintsPolicy: Honor# whenUnsatisfiable: DoNotSchedule## Additional deployment annotationspodAnnotations: {}## Additional deployment labelspodLabels: {}# Secret is used when job service communicates with other components.# If a secret key is not specified, Helm will generate one.# Must be a string of 16 chars.secret: ""# Use an existing secret resourceexistingSecret: ""# Key within the existing secret for the job service secretexistingSecretKey: JOBSERVICE_SECRET## The priority class to run the pod aspriorityClassName:registry:# set the service account to be used, default if left emptyserviceAccountName: ""# mount the service account tokenautomountServiceAccountToken: falseregistry:image:repository: goharbor/registry-photontag: v2.10.3# resources:# 
requests:# memory: 256Mi# cpu: 100mextraEnvVars: []controller:image:repository: goharbor/harbor-registryctltag: v2.10.3# resources:# requests:# memory: 256Mi# cpu: 100mextraEnvVars: []replicas: 1revisionHistoryLimit: 10nodeSelector: {}tolerations: []affinity: {}# Spread Pods across failure-domains like regions, availability zones or nodestopologySpreadConstraints: []# - maxSkew: 1# topologyKey: topology.kubernetes.io/zone# nodeTaintsPolicy: Honor# whenUnsatisfiable: DoNotSchedule## Additional deployment annotationspodAnnotations: {}## Additional deployment labelspodLabels: {}## The priority class to run the pod aspriorityClassName:# Secret is used to secure the upload state from client# and registry storage backend.# See: https://github.com/distribution/distribution/blob/main/docs/configuration.md#http# If a secret key is not specified, Helm will generate one.# Must be a string of 16 chars.secret: ""# Use an existing secret resourceexistingSecret: ""# Key within the existing secret for the registry service secretexistingSecretKey: REGISTRY_HTTP_SECRET# If true, the registry returns relative URLs in Location headers. The client is responsible for resolving the correct URL.relativeurls: falsecredentials:username: "harbor_registry_user"password: "harbor_registry_password"# If using existingSecret, the key must be REGISTRY_PASSWD and REGISTRY_HTPASSWDexistingSecret: ""# Login and password in htpasswd string format. Excludes `registry.credentials.username` and `registry.credentials.password`. May come in handy when integrating with tools like argocd or flux. This allows the same line to be generated each time the template is rendered, instead of the `htpasswd` function from helm, which generates different lines each time because of the salt.# htpasswdString: $apr1$XLefHzeG$Xl4.s00sMSCCcMyJljSZb0 # example stringhtpasswdString: ""middleware:enabled: falsetype: cloudFrontcloudFront:baseurl: example.cloudfront.netkeypairid: KEYPAIRIDduration: 3000sipfilteredby: none# The secret key that should be present is CLOUDFRONT_KEY_DATA, which should be the encoded private key# that allows access to CloudFrontprivateKeySecret: "my-secret"# enable purge _upload directoriesupload_purging:enabled: true# remove files in _upload directories which exist for a period of time, default is one week.age: 168h# the interval of the purge operationsinterval: 24hdryrun: falsetrivy:# enabled the flag to enable Trivy scannerenabled: trueimage:# repository the repository for Trivy adapter imagerepository: goharbor/trivy-adapter-photon# tag the tag for Trivy adapter imagetag: v2.10.3# set the service account to be used, default if left emptyserviceAccountName: ""# mount the service account tokenautomountServiceAccountToken: false# replicas the number of Pod replicasreplicas: 1# debugMode the flag to enable Trivy debug mode with more verbose scanning logdebugMode: false# vulnType a comma-separated list of vulnerability types. 
Possible values are `os` and `library`.vulnType: "os,library"# severity a comma-separated list of severities to be checkedseverity: "UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL"# ignoreUnfixed the flag to display only fixed vulnerabilitiesignoreUnfixed: false# insecure the flag to skip verifying registry certificateinsecure: false# gitHubToken the GitHub access token to download Trivy DB## Trivy DB contains vulnerability information from NVD, Red Hat, and many other upstream vulnerability databases.# It is downloaded by Trivy from the GitHub release page https://github.com/aquasecurity/trivy-db/releases and cached# in the local file system (`/home/scanner/.cache/trivy/db/trivy.db`). In addition, the database contains the update# timestamp so Trivy can detect whether it should download a newer version from the Internet or use the cached one.# Currently, the database is updated every 12 hours and published as a new release to GitHub.## Anonymous downloads from GitHub are subject to the limit of 60 requests per hour. Normally such rate limit is enough# for production operations. If, for any reason, it's not enough, you could increase the rate limit to 5000# requests per hour by specifying the GitHub access token. For more details on GitHub rate limiting please consult# https://developer.github.com/v3/#rate-limiting## You can create a GitHub token by following the instructions in# https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-linegitHubToken: ""# skipUpdate the flag to disable Trivy DB downloads from GitHub## You might want to set the value of this flag to `true` in test or CI/CD environments to avoid GitHub rate limiting issues.# If the value is set to `true` you have to manually download the `trivy.db` file and mount it in the# `/home/scanner/.cache/trivy/db/trivy.db` path.skipUpdate: false# skipJavaDBUpdate If the flag is enabled you have to manually download the `trivy-java.db` file and mount it in the# `/home/scanner/.cache/trivy/java-db/trivy-java.db` path#skipJavaDBUpdate: false # The offlineScan option prevents Trivy from sending API requests to identify dependencies.## Scanning JAR files and pom.xml may require Internet access for better detection, but this option tries to avoid it.# For example, the offline mode will not try to resolve transitive dependencies in pom.xml when the dependency doesn't# exist in the local repositories. It means a number of detected vulnerabilities might be fewer in offline mode.# It would work if all the dependencies are in local.# This option doesn’t affect DB download. You need to specify skipUpdate as well as offlineScan in an air-gapped environment.offlineScan: false# Comma-separated list of what security issues to detect. Possible values are `vuln`, `config` and `secret`. 
Defaults to `vuln`.securityCheck: "vuln"# The duration to wait for scan completiontimeout: 5m0sresources:requests:cpu: 200mmemory: 512Milimits:cpu: 1memory: 1GiextraEnvVars: []nodeSelector: {}tolerations: []affinity: {}# Spread Pods across failure-domains like regions, availability zones or nodestopologySpreadConstraints: []# - maxSkew: 1# topologyKey: topology.kubernetes.io/zone# nodeTaintsPolicy: Honor# whenUnsatisfiable: DoNotSchedule## Additional deployment annotationspodAnnotations: {}## Additional deployment labelspodLabels: {}## The priority class to run the pod aspriorityClassName:database:# if external database is used, set "type" to "external"# and fill the connection information in "external" sectiontype: internalinternal:# set the service account to be used, default if left emptyserviceAccountName: ""# mount the service account tokenautomountServiceAccountToken: falseimage:repository: goharbor/harbor-dbtag: v2.10.3# The initial superuser password for internal databasepassword: "changeit"# The size limit for Shared memory, pgSQL use it for shared_buffer# More details see:# https://github.com/goharbor/harbor/issues/15034shmSizeLimit: 512Mi# resources:# requests:# memory: 256Mi# cpu: 100m# The timeout used in livenessProbe; 1 to 5 secondslivenessProbe:timeoutSeconds: 1# The timeout used in readinessProbe; 1 to 5 secondsreadinessProbe:timeoutSeconds: 1extraEnvVars: []nodeSelector: {}tolerations: []affinity: {}## The priority class to run the pod aspriorityClassName:initContainer:migrator: {}# resources:# requests:# memory: 128Mi# cpu: 100mpermissions: {}# resources:# requests:# memory: 128Mi# cpu: 100mexternal:host: "192.168.0.1"port: "5432"username: "user"password: "password"coreDatabase: "registry"# if using existing secret, the key must be "password"existingSecret: ""# "disable" - No SSL# "require" - Always SSL (skip verification)# "verify-ca" - Always SSL (verify that the certificate presented by the# server was signed by a trusted CA)# "verify-full" - Always SSL (verify that the certification presented by the# server was signed by a trusted CA and the server host name matches the one# in the certificate)sslmode: "disable"# The maximum number of connections in the idle connection pool per pod (core+exporter).# If it <=0, no idle connections are retained.maxIdleConns: 100# The maximum number of open connections to the database per pod (core+exporter).# If it <= 0, then there is no limit on the number of open connections.# Note: the default number of connections is 1024 for postgre of harbor.maxOpenConns: 900## Additional deployment annotationspodAnnotations: {}## Additional deployment labelspodLabels: {}redis:# if external Redis is used, set "type" to "external"# and fill the connection information in "external" sectiontype: internalinternal:# set the service account to be used, default if left emptyserviceAccountName: ""# mount the service account tokenautomountServiceAccountToken: falseimage:repository: goharbor/redis-photontag: v2.10.3# resources:# requests:# memory: 256Mi# cpu: 100mextraEnvVars: []nodeSelector: {}tolerations: []affinity: {}## The priority class to run the pod aspriorityClassName:# # jobserviceDatabaseIndex defaults to "1"# # registryDatabaseIndex defaults to "2"# # trivyAdapterIndex defaults to "5"# # harborDatabaseIndex defaults to "0", but it can be configured to "6", this config is optional# # cacheLayerDatabaseIndex defaults to "0", but it can be configured to "7", this config is optionaljobserviceDatabaseIndex: "1"registryDatabaseIndex: 
"2"trivyAdapterIndex: "5"# harborDatabaseIndex: "6"# cacheLayerDatabaseIndex: "7"external:# support redis, redis+sentinel# addr for redis: <host_redis>:<port_redis># addr for redis+sentinel: <host_sentinel1>:<port_sentinel1>,<host_sentinel2>:<port_sentinel2>,<host_sentinel3>:<port_sentinel3>addr: "192.168.0.2:6379"# The name of the set of Redis instances to monitor, it must be set to support redis+sentinelsentinelMasterSet: ""# The "coreDatabaseIndex" must be "0" as the library Harbor# used doesn't support configuring it# harborDatabaseIndex defaults to "0", but it can be configured to "6", this config is optional# cacheLayerDatabaseIndex defaults to "0", but it can be configured to "7", this config is optionalcoreDatabaseIndex: "0"jobserviceDatabaseIndex: "1"registryDatabaseIndex: "2"trivyAdapterIndex: "5"# harborDatabaseIndex: "6"# cacheLayerDatabaseIndex: "7"# username field can be an empty string, and it will be authenticated against the default userusername: ""password: ""# If using existingSecret, the key must be REDIS_PASSWORDexistingSecret: ""## Additional deployment annotationspodAnnotations: {}## Additional deployment labelspodLabels: {}exporter:replicas: 1revisionHistoryLimit: 10# resources:# requests:# memory: 256Mi# cpu: 100mextraEnvVars: []podAnnotations: {}## Additional deployment labelspodLabels: {}serviceAccountName: ""# mount the service account tokenautomountServiceAccountToken: falseimage:repository: goharbor/harbor-exportertag: v2.10.3nodeSelector: {}tolerations: []affinity: {}# Spread Pods across failure-domains like regions, availability zones or nodestopologySpreadConstraints: []# - maxSkew: 1# topologyKey: topology.kubernetes.io/zone# nodeTaintsPolicy: Honor# whenUnsatisfiable: DoNotSchedulecacheDuration: 23cacheCleanInterval: 14400## The priority class to run the pod aspriorityClassName:metrics:enabled: falsecore:path: /metricsport: 8001registry:path: /metricsport: 8001jobservice:path: /metricsport: 8001exporter:path: /metricsport: 8001## Create prometheus serviceMonitor to scrape harbor metrics.## This requires the monitoring.coreos.com/v1 CRD. Please see## https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/user-guides/getting-started.md##serviceMonitor:enabled: falseadditionalLabels: {}# Scrape interval. 
If not set, the Prometheus default scrape interval is used.interval: ""# Metric relabel configs to apply to samples before ingestion.metricRelabelings:[]# - action: keep# regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+'# sourceLabels: [__name__]# Relabel configs to apply to samples before ingestion.relabelings:[]# - sourceLabels: [__meta_kubernetes_pod_node_name]# separator: ;# regex: ^(.*)$# targetLabel: nodename# replacement: $1# action: replacetrace:enabled: false# trace provider: jaeger or otel# jaeger should be 1.26+provider: jaeger# set sample_rate to 1 if you wanna sampling 100% of trace data; set 0.5 if you wanna sampling 50% of trace data, and so forthsample_rate: 1# namespace used to differentiate different harbor services# namespace:# attributes is a key value dict contains user defined attributes used to initialize trace provider# attributes:# application: harborjaeger:# jaeger supports two modes:# collector mode(uncomment endpoint and uncomment username, password if needed)# agent mode(uncomment agent_host and agent_port)endpoint: http://hostname:14268/api/traces# username:# password:# agent_host: hostname# export trace data by jaeger.thrift in compact mode# agent_port: 6831otel:endpoint: hostname:4318url_path: /v1/tracescompression: falseinsecure: true# timeout is in secondstimeout: 10# cache layer configurations
# if this feature enabled, harbor will cache the resource
# `project/project_metadata/repository/artifact/manifest` in the redis
# which help to improve the performance of high concurrent pulling manifest.
cache:# default is not enabled.enabled: false# default keep cache for one day.expireHours: 24
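Before installing, it is worth confirming that the PVs created in section 4.3 really carry storageClassName harbor-nfs and capacities matching the claims above; a mismatch leaves the PVCs Pending indefinitely. A quick check (the column layout below is just for readability):

# Confirm that storageClassName and capacity of each PV match the claims above
kubectl get pv -o custom-columns=NAME:.metadata.name,STORAGECLASS:.spec.storageClassName,CAPACITY:.spec.capacity.storage,STATUS:.status.phase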
4.4 Install
# Settings that need to be changed
expose:
  type: ingress
  tls:
    enabled: true
  ingress:
    hosts:
      core: harbor.dongdong.com

externalURL: harbor.dongdong.com

persistence:
  enabled: true
  resourcePolicy: "keep"
  persistentVolumeClaim:
    registry:
      storageClass: "harbor-nfs" # must match the storageClassName of the PV
      size: 50Gi # must match the storage size of the PV, otherwise the PVC will not bind
    jobservice:
      storageClass: "harbor-nfs"
      size: 10Gi
    database:
      storageClass: "harbor-nfs"
      size: 10Gi
    redis:
      storageClass: "harbor-nfs"
      size: 10Gi
    trivy:
      storageClass: "harbor-nfs"
      size: 10Gi
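Optionally, sanity-check the edited values before touching the cluster. helm lint and helm template only render the chart locally, so they are safe to run at any time; this assumes you are inside the unpacked chart directory (the one containing Chart.yaml):

# Render the chart locally to catch YAML/values mistakes early
helm lint . -f values.yaml
helm template harbor . -f values.yaml -n harbor | head -50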
# Create the namespace
kubectl create ns harbor
# From inside the harbor chart directory, run
helm install harbor . -f values.yaml -n harbor

# Output like the following indicates success (in this run the values file was saved as harbor.yaml):
[root@kube-master harbor]# helm install harbor . -f harbor.yaml -n harbor
NAME: harbor
LAST DEPLOYED: Thu Sep 12 20:30:37 2024
NAMESPACE: harbor
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Please wait for several minutes for Harbor deployment to complete.
Then you should be able to visit the Harbor portal at harbor.dongdong.com
For more details, please visit https://github.com/goharbor/harbor
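Since the chart exposes Harbor through the Ingress host harbor.dongdong.com, a client machine must resolve that name to a node where ingress-nginx is listening. A minimal sketch, assuming 192.168.100.100 is such a node and that the Docker daemon already trusts the self-signed certificate (or lists the host under insecure-registries):

# On the client machine: map the ingress host to a node running ingress-nginx
echo "192.168.100.100 harbor.dongdong.com" >> /etc/hosts

# Log in and push a test image to verify the deployment end to end
docker login harbor.dongdong.com -u admin -p Harbor12345
docker pull busybox:latest
docker tag busybox:latest harbor.dongdong.com/library/busybox:latest
docker push harbor.dongdong.com/library/busybox:latest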
4.5 Check whether the containers are running
# After a while, two containers still have not started
[root@kube-master harbor]# kubectl get pod -n harbor
NAME READY STATUS RESTARTS AGE
harbor-core-6969ffd694-xd8h4 0/1 Running 4 (58s ago) 6m13s
harbor-database-0 1/1 Running 0 6m13s
harbor-jobservice-79cfc6f696-pzpqj 0/1 CrashLoopBackOff 4 (73s ago) 6m13s
harbor-portal-675d4c5858-sgxw7 1/1 Running 0 6m13s
harbor-redis-0 1/1 Running 0 6m13s
harbor-registry-5859f958cf-vx9fg 2/2 Running 0 6m13s
harbor-trivy-0 1/1 Running 0 6m13s

kubectl describe pod harbor-core-6969ffd694-xd8h4 -n harbor

# The error message is:
2024-09-12T13:16:36Z [ERROR] [/pkg/config/rest/rest.go:50]: Failed on load rest config err:Get "http://harbor-core:80/api/v2.0/internalconfig": dial tcp 10.103.74.3:80: connect: connection refused, url:http://harbor-core:80/api/v2.0/internalconfig
panic: failed to load configuration, error: failed to load rest config

# The connection-refused error points at in-cluster DNS/service resolution, so restart coredns
[root@kube-master harbor]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-66f779496c-m4qjx 1/1 Running 4 (5h52m ago) 34d
coredns-66f779496c-s8lxx 1/1 Running 4 (5h52m ago) 34d
etcd-kube-master 1/1 Running 4 (5h52m ago) 34d
kube-apiserver-kube-master 1/1 Running 4 (5h52m ago) 34d
kube-controller-manager-kube-master 1/1 Running 5 (5h52m ago) 34d
kube-proxy-2hrzj 1/1 Running 3 (5h52m ago) 34d
kube-proxy-h6sft 1/1 Running 4 (5h52m ago) 34d
kube-proxy-vcdzq 1/1 Running 3 (5h52m ago) 34d
kube-scheduler-kube-master 1/1 Running 5 (5h52m ago) 34d

[root@kube-master harbor]# kubectl delete pod coredns-66f779496c-m4qjx coredns-66f779496c-s8lxx -n kube-system
pod "coredns-66f779496c-m4qjx" deleted
pod "coredns-66f779496c-s8lxx" deleted