
Docker Operations and Maintenance

  • 1. Swarm Cluster Management
    • 1.1 Core Swarm Concepts
      • 1.1.1 Clusters
      • 1.1.2 Nodes
      • 1.1.3 Services and Tasks
      • 1.1.4 Load Balancing
    • 1.2 Installing Swarm
      • Preparation
      • Create the cluster
      • Add worker nodes to the cluster
      • Publish a service to the cluster
      • Scale one or more services
      • Remove a service from the cluster
      • Passwordless SSH login
  • 2. Docker Compose
    • Using Compose with Swarm
  • 3. Configuring a Private Registry (Harbor)
    • 3.1 Environment preparation
    • 3.2 Install Docker
    • 3.3 Install docker-compose
    • 3.4 Prepare Harbor
    • 3.5 Configure certificates
    • 3.6 Deploy and configure Harbor
    • 3.7 Configure and start the service
    • 3.8 Customize the local registry
    • 3.9 Test the local registry

1. Swarm Cluster Management

Docker Swarm is Docker's official container orchestration system and official container cluster platform, implemented in Go.

The architecture is as follows:
(architecture diagram omitted)

1.1 Core Swarm Concepts

1.1.1 Clusters

A cluster consists of multiple Docker hosts that run in swarm mode and act as managers and workers.

1.1.2 Nodes

A swarm is a collection of nodes. A node can be a bare-metal machine or a virtual machine, and it can play one or both of two roles: Manager or Worker.

  • Manager nodes
    A Docker Swarm cluster needs at least one manager node; managers coordinate with each other using the Raft consensus protocol.
    Typically, the first node to enable swarm mode becomes the leader, and nodes that join later are followers. If the current leader goes down, the remaining managers elect a new leader. Every manager holds a complete copy of the current cluster state, which keeps the manager tier highly available.

  • Worker nodes
    Worker nodes are where the containers running the actual application services live. In theory a manager node can also act as a worker, but this is not recommended in production. Worker nodes communicate with each other over the control plane using the gossip protocol, and this communication is asynchronous.

1.1.3 Services and Tasks

  • services
    A swarm service is an abstraction: it is simply a description of the desired state of an application service running on the swarm cluster. It reads like a manifest listing:
    • the service name
    • which image to create containers from
    • how many replicas to run
    • which networks the service's containers attach to
    • which ports to publish
  • tasks
    In Docker Swarm, a task is the smallest unit of deployment; tasks map one-to-one to containers.
  • stacks
    A stack is a collection of related services. A stack is defined in a single YAML file.
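A stack file uses the same YAML syntax as Docker Compose. A minimal, hypothetical example (the file name, service name, and replica count are illustrative, not from this walkthrough):

```yaml
# web-stack.yaml (hypothetical) -- deploy with:
#   docker stack deploy -c web-stack.yaml mystack
services:
  web:
    image: nginx:1.27.4
    deploy:
      replicas: 3        # desired number of tasks (one container each)
    ports:
      - "80:80"          # published through the swarm routing mesh
```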

1.1.4 Load Balancing

The swarm manager can automatically assign a published port to a service, or you can configure the published port yourself, using any unused port. If no port is specified, the manager assigns one from the 30000-32767 range.
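A small sketch (not from the original walkthrough) of how you might check whether a port you plan to publish explicitly collides with the manager's auto-assign range described above:

```shell
# Returns success (0) if the port lies inside the range the swarm
# manager uses for auto-assigned published ports (30000-32767).
in_swarm_dynamic_range() {
    port=$1
    [ "$port" -ge 30000 ] && [ "$port" -le 32767 ]
}

for p in 80 8080 30500; do
    if in_swarm_dynamic_range "$p"; then
        echo "$p: inside the auto-assign range, may collide"
    else
        echo "$p: outside the auto-assign range"
    fi
done
```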

1.2 Installing Swarm

Preparation

[root@docker ~]# hostnamectl hostname manager
[root@docker ~]# exit
[root@manager ~]# nmcli c modify ens160 ipv4.addresses <IP-address>/24
[root@manager ~]# init 6
[root@manager ~]# cat >> /etc/hosts <<EOF
> 192.168.98.47  manager1
> 192.168.98.48  worker1
> 192.168.98.49  worker2
> EOF
[root@manager ~]# ping -c 3 worker1
PING worker1 (192.168.98.48) 56(84) bytes of data.
64 bytes from worker1 (192.168.98.48): icmp_seq=1 ttl=64 time=0.678 ms
64 bytes from worker1 (192.168.98.48): icmp_seq=2 ttl=64 time=0.461 ms
64 bytes from worker1 (192.168.98.48): icmp_seq=3 ttl=64 time=0.353 ms

--- worker1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2035ms
rtt min/avg/max/mdev = 0.353/0.497/0.678/0.135 ms
[root@manager ~]# ping -c 3 worker2
PING worker2 (192.168.98.49) 56(84) bytes of data.
64 bytes from worker2 (192.168.98.49): icmp_seq=1 ttl=64 time=0.719 ms
64 bytes from worker2 (192.168.98.49): icmp_seq=2 ttl=64 time=0.300 ms
64 bytes from worker2 (192.168.98.49): icmp_seq=3 ttl=64 time=0.417 ms

--- worker2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2089ms
rtt min/avg/max/mdev = 0.300/0.478/0.719/0.176 ms
[root@manager ~]# ping -c 3 manager1
PING manager1 (192.168.98.47) 56(84) bytes of data.
64 bytes from manager1 (192.168.98.47): icmp_seq=1 ttl=64 time=0.034 ms
64 bytes from manager1 (192.168.98.47): icmp_seq=2 ttl=64 time=0.038 ms
64 bytes from manager1 (192.168.98.47): icmp_seq=3 ttl=64 time=0.035 ms

--- manager1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2042ms
rtt min/avg/max/mdev = 0.034/0.035/0.038/0.001 ms
[root@worker1 ~]# ping -c 3 worker1
PING worker1 (192.168.98.48) 56(84) bytes of data.
64 bytes from worker1 (192.168.98.48): icmp_seq=1 ttl=64 time=0.023 ms
64 bytes from worker1 (192.168.98.48): icmp_seq=2 ttl=64 time=0.024 ms
64 bytes from worker1 (192.168.98.48): icmp_seq=3 ttl=64 time=0.035 ms

--- worker1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2056ms
rtt min/avg/max/mdev = 0.023/0.027/0.035/0.005 ms
[root@worker1 ~]# ping -c 3 worker2
PING worker2 (192.168.98.49) 56(84) bytes of data.
64 bytes from worker2 (192.168.98.49): icmp_seq=1 ttl=64 time=0.405 ms
64 bytes from worker2 (192.168.98.49): icmp_seq=2 ttl=64 time=0.509 ms
64 bytes from worker2 (192.168.98.49): icmp_seq=3 ttl=64 time=0.381 ms

--- worker2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2065ms
rtt min/avg/max/mdev = 0.381/0.431/0.509/0.055 ms
[root@worker2 ~]# ping -c 3 worker1
PING worker1 (192.168.98.48) 56(84) bytes of data.
64 bytes from worker1 (192.168.98.48): icmp_seq=1 ttl=64 time=0.304 ms
64 bytes from worker1 (192.168.98.48): icmp_seq=2 ttl=64 time=0.346 ms
64 bytes from worker1 (192.168.98.48): icmp_seq=3 ttl=64 time=0.460 ms

--- worker1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2030ms
rtt min/avg/max/mdev = 0.304/0.370/0.460/0.065 ms
[root@worker2 ~]# ping -c 3 worker2
PING worker2 (192.168.98.49) 56(84) bytes of data.
64 bytes from worker2 (192.168.98.49): icmp_seq=1 ttl=64 time=0.189 ms
64 bytes from worker2 (192.168.98.49): icmp_seq=2 ttl=64 time=0.079 ms
64 bytes from worker2 (192.168.98.49): icmp_seq=3 ttl=64 time=0.055 ms

--- worker2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2038ms
rtt min/avg/max/mdev = 0.055/0.107/0.189/0.058 ms

# Check the version
docker run --rm swarm -v
swarm version 1.2.9 (527a849)
  • Image:
[root@manager ~]# ls
anaconda-ks.cfg
[root@manager ~]# ls
anaconda-ks.cfg  swarm_1.2.9.tar.gz
[root@manager ~]# docker load -i swarm_1.2.9.tar.gz
6104cec23b11: Loading layer  12.44MB/12.44MB
9c4e304108a9: Loading layer  281.1kB/281.1kB
a8731583ab53: Loading layer  2.048kB/2.048kB
Loaded image: swarm:1.2.9
[root@manager ~]# docker images
REPOSITORY   TAG       IMAGE ID       CREATED       SIZE
swarm        1.2.9     1a5eb59a410f   4 years ago   12.7MB
[root@manager ~]# ls
anaconda-ks.cfg  swarm_1.2.9.tar.gz

# Copy the archive to worker1 and worker2
scp swarm_1.2.9.tar.gz root@192.168.98.48:~
[root@worker1 ~]# ls
anaconda-ks.cfg  swarm_1.2.9.tar.gz

Create the cluster

Syntax:

 docker swarm init --advertise-addr <MANAGER-IP>	# IP of the manager host
  • manager
[root@manager ~]# docker swarm init --advertise-addr 192.168.98.47
# --advertise-addr: the IP that other swarm worker nodes use to contact the manager
Swarm initialized: current node (bsicfg4mvo18a0tv0z17crtnh) is now a manager.

To add a worker to this swarm, run the following command:

# Note: the join token is not valid forever; it expires
docker swarm join --token SWMTKN-1-2xidivy00h9ts8veuk602dqwwwuhvoerzgklk3tvwcin92wnm9-67wyr4mfs1vwsncmnsco6fmqb 192.168.98.47:2377
# Run this command on the worker hosts
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

# Check node status (all Ready)
[root@manager ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
bsicfg4mvo18a0tv0z17crtnh *   manager    Ready     Active         Leader           28.0.4
xtc6pax5faoobzqo341vf72rv     worker1    Ready     Active                          28.0.4
ik0tqz8axejwu82mmukwjo47m     worker2    Ready     Active                          28.0.4

# Check cluster status
docker info
# Force this node to leave the swarm
docker swarm leave --force

[root@manager ~]# docker service ls
ID        NAME      MODE      REPLICAS   IMAGE     PORTS
[root@manager ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

# List nodes
docker node ls
# Check cluster status
docker info
# Force this node to leave the swarm
docker swarm leave --force
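These status commands are easy to script against. A sketch that counts Ready nodes; the sample output below is copied from the session above so the snippet can be followed without a live cluster (with one, replace the variable with the output of `docker node ls`):

```shell
# Sample `docker node ls` output captured earlier in this walkthrough.
node_ls='ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
bsicfg4mvo18a0tv0z17crtnh *   manager    Ready     Active         Leader           28.0.4
xtc6pax5faoobzqo341vf72rv     worker1    Ready     Active                          28.0.4
ik0tqz8axejwu82mmukwjo47m     worker2    Ready     Active                          28.0.4'

# Skip the header row, count lines reporting Ready.
ready=$(printf '%s\n' "$node_ls" | awk 'NR > 1 && /Ready/ {n++} END {print n}')
echo "ready nodes: $ready"
```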

Add worker nodes to the cluster

Once the cluster and its manager node exist, worker nodes can be added.

  • Add worker node worker1
[root@worker1 ~]# docker swarm join --token SWMTKN-1-2xidivy00h9ts8veuk602dqwwwuhvoerzgklk3tvwcin92wnm9-67wyr4mfs1vwsncmnsco6fmqb 192.168.98.47:2377
This node joined a swarm as a worker.
  • Add worker node worker2
[root@worker2 ~]# docker swarm join --token SWMTKN-1-2xidivy00h9ts8veuk602dqwwwuhvoerzgklk3tvwcin92wnm9-67wyr4mfs1vwsncmnsco6fmqb 192.168.98.47:2377
This node joined a swarm as a worker.
  • If you forget the token value, run the following on the management node 192.168.98.47 (manager):
    (the token is not permanently valid; it expires)
# Run on the manager node
[root@manager ~]# docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-2xidivy00h9ts8veuk602dqwwwuhvoerzgklk3tvwcin92wnm9-67wyr4mfs1vwsncmnsco6fmqb 192.168.98.47:2377
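The join command can also be captured and pushed to workers automatically. A sketch: with a live cluster you would use `docker swarm join-token worker -q` (which prints only the token); here the full output from the session above stands in for the docker call.

```shell
# Output of `docker swarm join-token worker`, as shown above.
join_output='To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-2xidivy00h9ts8veuk602dqwwwuhvoerzgklk3tvwcin92wnm9-67wyr4mfs1vwsncmnsco6fmqb 192.168.98.47:2377'

# Extract the join command line and strip leading whitespace.
join_cmd=$(printf '%s\n' "$join_output" | grep 'docker swarm join' | sed 's/^ *//')
echo "$join_cmd"
# A common pattern is then:  ssh root@worker1 "$join_cmd"
```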

Publish a service to the cluster

# Check node status (all Ready)
[root@manager ~]# docker node ls
ID                            HOSTNAME   STATUS    AVAILABILITY   MANAGER STATUS   ENGINE VERSION
bsicfg4mvo18a0tv0z17crtnh *   manager    Ready     Active         Leader           28.0.4
xtc6pax5faoobzqo341vf72rv     worker1    Ready     Active                          28.0.4
ik0tqz8axejwu82mmukwjo47m     worker2    Ready     Active                          28.0.4
  • manager
# Create a service (--replicas 1: number of replicas).
# Run on the management node 192.168.98.47 (manager):
[root@manager ~]# docker service create --replicas 1 --name nginx2 -p 80:80 nginx:1.27.4
# Creates a container-backed service named nginx2 (similar to docker run);
# if nginx:1.27.4 is not available locally, Docker tries to pull it automatically.

If the image was not pulled automatically, load it manually (all three hosts need nginx:1.27.4):

[root@manager ~]# docker load -i nginx_1.27.4.tar.gz 
7914c8f600f5: Loading layer  77.83MB/77.83MB
9574fd0ae014: Loading layer  118.3MB/118.3MB
17129ef2de1a: Loading layer  3.584kB/3.584kB
320c70dd6b6b: Loading layer  4.608kB/4.608kB
2ef6413cdcb5: Loading layer   2.56kB/2.56kB
d6266720b0a6: Loading layer   5.12kB/5.12kB
1fb7f1e96249: Loading layer  7.168kB/7.168kB
Loaded image: nginx:1.27.4
[root@manager ~]# docker images
REPOSITORY   TAG       IMAGE ID       CREATED        SIZE
nginx        1.27.4    97662d24417b   2 months ago   192MB
swarm        1.2.9     1a5eb59a410f   4 years ago    12.7MB
[root@manager ~]# docker service ls
ID             NAME      MODE         REPLICAS   IMAGE          PORTS
o9atmzaj9on8   nginx2    replicated   1/1        nginx:1.27.4   *:80->80/tcp
[root@manager ~]# docker service create --replicas 1 --name nginx2 -p 80:80 nginx:1.27.4
z0j2olxqwrlpm84uho1w1m20p
overall progress: 1 out of 1 tasks 
1/1: running   
verify: Service z0j2olxqwrlpm84uho1w1m20p converged

# On the worker nodes, fetch the image archive from the manager:
scp root@192.168.98.47:~/nginx_1.27.4.tar.gz ~
# --pretty: human-readable output
[root@manager ~]# docker service inspect --pretty nginx2
ID:		o9atmzaj9on8tf9ffmp8k7bun
Name:		nginx2
Service Mode:	Replicated
 Replicas:	1
Placement:
UpdateConfig:
 Parallelism:	1
 On failure:	pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Update order:      stop-first
RollbackConfig:
 Parallelism:	1
 On failure:	pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Rollback order:    stop-first
ContainerSpec:
 Image:		nginx:1.27.4@sha256:09369da6b10306312cd908661320086bf87fbae1b6b0c49a1f50ba531fef2eab
 Init:		false
Resources:
Endpoint Mode:	vip
Ports:
 PublishedPort = 80
 Protocol = tcp
 TargetPort = 80
 PublishMode = ingress
[root@manager ~]# docker service ps nginx2
ID             NAME       IMAGE          NODE      DESIRED STATE   CURRENT STATE           ERROR     PORTS
id3xjfro1bdl   nginx2.1   nginx:1.27.4   manager   Running         Running 5 minutes ago
#	 ID			name		image			node		desired state	current state / uptime

Scale one or more services

# scale: grow or shrink the number of replicas
[root@manager ~]# docker service scale nginx2=5
nginx2 scaled to 5
overall progress: 5 out of 5 tasks 
1/5: running   
2/5: running   
3/5: running   
4/5: running   
5/5: running   
verify: Service nginx2 converged

[root@manager ~]# docker service ps nginx2
ID             NAME       IMAGE          NODE      DESIRED STATE   CURRENT STATE           ERROR     PORTS
rbmlfw6oiyot   nginx2.1   nginx:1.27.4   worker1   Running         Running 9 minutes ago             
uvko24qqf1ps   nginx2.2   nginx:1.27.4   manager   Running         Running 2 minutes ago             
kwjys4eldv2d   nginx2.3   nginx:1.27.4   manager   Running         Running 2 minutes ago             
onnmzewg4ou4   nginx2.4   nginx:1.27.4   worker2   Running         Running 2 minutes ago             
v0we91sbj7p5   nginx2.5   nginx:1.27.4   worker1   Running         Running 2 minutes ago
[root@manager ~]# docker service scale nginx2=1
nginx2 scaled to 1
overall progress: 1 out of 1 tasks 
1/1: running   
verify: Service nginx2 converged 
[root@manager ~]# docker service ls
ID             NAME      MODE         REPLICAS   IMAGE          PORTS  
z0j2olxqwrlp   nginx2    replicated   1/1        nginx:1.27.4   *:80->80/tcp
[root@manager ~]# docker service scale nginx2=3
nginx2 scaled to 3
overall progress: 3 out of 3 tasks 
1/3: running   [==================================================>] 
2/3: running   [==================================================>] 
3/3: running   [==================================================>] 
verify: Service nginx2 converged 
[root@manager ~]# docker service ls
ID             NAME      MODE         REPLICAS   IMAGE          PORTS
o9atmzaj9on8   nginx2    replicated   3/3        nginx:1.27.4   *:80->80/tcp
[root@manager ~]# docker service ps nginx2 
ID             NAME       IMAGE          NODE      DESIRED STATE   CURRENT STATE            ERROR     PORTS
id3xjfro1bdl   nginx2.1   nginx:1.27.4   manager   Running         Running 15 minutes ago             
hx7sqjr88ac8   nginx2.2   nginx:1.27.4   worker2   Running         Running 2 minutes ago              
p7x2lyo6vdk8   nginx2.5   nginx:1.27.4   worker1   Running         Running 2 minutes ago  
  • Update the service:
[root@manager ~]# docker service update --publish-rm 80:80 --publish-add 88:80 nginx2
nginx2
overall progress: 3 out of 3 tasks 
1/3: running   [==================================================>] 
2/3: running   [==================================================>] 
3/3: running   [==================================================>] 
verify: Service nginx2 converged 
[root@manager ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS     NAMES
78adf323559b   nginx:1.27.4   "/docker-entrypoint.…"   20 seconds ago   Up 17 seconds   80/tcp    nginx2.1.ds6dy33b6m62kzajs9frvdw4g
[root@manager ~]# docker service ps nginx2 
ID             NAME           IMAGE          NODE      DESIRED STATE   CURRENT STATE             ERROR     PORTS
ds6dy33b6m62   nginx2.1       nginx:1.27.4   manager   Running         Running 19 seconds ago              
id3xjfro1bdl    \_ nginx2.1   nginx:1.27.4   manager   Shutdown        Shutdown 20 seconds ago             
rpnodskwk4f7   nginx2.2       nginx:1.27.4   worker2   Running         Running 23 seconds ago              
hx7sqjr88ac8    \_ nginx2.2   nginx:1.27.4   worker2   Shutdown        Shutdown 24 seconds ago             
shdko7kq7vvw   nginx2.5       nginx:1.27.4   worker1   Running         Running 16 seconds ago              
p7x2lyo6vdk8    \_ nginx2.5   nginx:1.27.4   worker1   Shutdown        Shutdown 16 seconds ago             
[root@manager ~]# docker service ls
ID             NAME      MODE         REPLICAS   IMAGE          PORTS
o9atmzaj9on8   nginx2    replicated   3/3        nginx:1.27.4   *:88->80/tcp
  • worker1
[root@worker1 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED              STATUS              PORTS     NAMES
21c117b44558   nginx:1.27.4   "/docker-entrypoint.…"   About a minute ago   Up About a minute   80/tcp    nginx2.5.shdko7kq7vvwqsmzgtm5790s2
  • worker2
[root@worker2 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED              STATUS              PORTS     NAMES
a761908c5f2d   nginx:1.27.4   "/docker-entrypoint.…"   About a minute ago   Up About a minute   80/tcp    nginx2.2.rpnodskwk4f79yxpf0tctb5n8
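Convergence after a scale or update can be checked by comparing running vs desired replicas. A sketch; the `3/3` sample value is taken from the `docker service ls` output above, while on a live cluster you could obtain it with `docker service ls --filter name=nginx2 --format '{{.Replicas}}'`:

```shell
# REPLICAS column from `docker service ls`, e.g. "3/3".
replicas='3/3'

running=${replicas%/*}    # part before the slash
desired=${replicas#*/}    # part after the slash
if [ "$running" = "$desired" ]; then
    echo "nginx2 converged ($running/$desired)"
else
    echo "nginx2 still converging ($running/$desired)"
fi
```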

Remove a service from the cluster

[root@manager ~]# docker service rm nginx2
nginx2
[root@manager ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@manager ~]# docker service ls
ID        NAME      MODE      REPLICAS   IMAGE     PORTS
[root@worker1 ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@worker2 ~]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

Passwordless SSH login

  • ssh-keygen
  • ssh-copy-id root@worker1
  • ssh-copy-id root@worker2
  • ssh worker1
  • exit
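The key-distribution steps above can be wrapped in a small loop. Shown here in dry-run form (it only prints the commands); drop the `echo` to actually run it, and note that ssh-keygen only needs to run once on the manager:

```shell
# Dry run: print the ssh-copy-id command for each worker node.
workers="worker1 worker2"
for w in $workers; do
    echo ssh-copy-id "root@$w"
done
```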
[root@manager ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 	# press Enter
Enter passphrase (empty for no passphrase):  	# press Enter
Enter same passphrase again:  	# press Enter
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:6/aBJGgBhXrJ5l541rWVPguq1ao5QLDqe6LfztIxBAw root@manager
The key's randomart image is:
+---[RSA 3072]----+
|Eo.o.            |
|. +.             |
| = o.      .     |
|o * .o  . o      |
|.= oo...S+       |
|. +.* .+ooo      |
|.. * o..+..o     |
| oo+oo.o. ..     |
|oo=oB+.....      |
+----[SHA256]-----+
[root@manager ~]# ssh-copy-id root@worker1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@worker1's password: 
Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@worker1'"
and check to make sure that only the key(s) you wanted were added.

[root@manager ~]# ssh-copy-id root@worker2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'worker2 (192.168.98.49)' can't be established.
ED25519 key fingerprint is SHA256:I3/lsrnTEnXOE3LFvTLRUXAJ+AhSVrIEWtqTnleRz9w.
This host key is known by the following other names/addresses:
    ~/.ssh/known_hosts:1: manager
    ~/.ssh/known_hosts:4: worker1
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@worker2's password: 
Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@worker2'"
and check to make sure that only the key(s) you wanted were added.

[root@manager ~]# ssh worker1
Register this system with Red Hat Insights: rhc connect

Example:
# rhc connect --activation-key <key> --organization <org>

The rhc client and Red Hat Insights will enable analytics and additional
management capabilities on your system.
View your connected systems at https://console.redhat.com/insights

You can learn more about how to register your system
using rhc at https://red.ht/registration
Last login: Thu May  8 16:36:42 2025 from 192.168.98.1
[root@worker1 ~]# exit
logout
Connection to worker1 closed.

2. Docker Compose

Download the docker-compose-linux-x86_64 binary.

  • Create the docker-compose.yaml file
[root@docker ~]# mv docker-compose-linux-x86_64 /usr/bin/docker-compose
[root@docker ~]# ll
total 4
-rw-------. 1 root root 989 Feb 27 16:19 anaconda-ks.cfg
[root@docker ~]# chmod +x /usr/bin/docker-compose 
[root@docker ~]# ll /usr/bin/docker-compose 
-rwxr-xr-x. 1 root root 73699264 May 10 09:55 /usr/bin/docker-compose
[root@docker ~]# docker-compose --version
Docker Compose version v2.35.1
[root@docker ~]# vim docker-compose.yaml
[root@docker ~]# ls
anaconda-ks.cfg  docker-compose.yaml
  • In a second session, create the required host directories
[root@docker ~]# mkdir -p /opt/{nginx,mysql,redis}
[root@docker ~]# mkdir /opt/nginx/{conf,html}
[root@docker ~]# mkdir /opt/mysql/data
[root@docker ~]# mkdir /opt/redis/data
[root@docker ~]# tree /opt
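Before running `docker-compose up`, it can pay to verify that every bind-mount source exists; if a host directory is missing, the Docker engine creates it as a root-owned directory, which is rarely what you want. A sketch (the `BASE` override is only there so the check can be dry-run outside /opt):

```shell
# Check that the host-side bind-mount directories from the compose file exist.
base=${BASE:-/opt}
for d in nginx/conf nginx/html mysql/data redis/data; do
    if [ -d "$base/$d" ]; then
        echo "ok:      $base/$d"
    else
        echo "missing: $base/$d"
    fi
done
```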
  • Run it
[root@docker ~]# docker-compose up -d
WARN[0000] /root/docker-compose.yaml: the attribute `version` is obsolete, it will be ignored, please remove it to avoid potential confusion 
[+] Running 26/26
 ✔ mysql Pulled                 91.7s
 ✔ c2eb5d06bfea Pull complete   16.4s
 ✔ ba361f0ba5e7 Pull complete   16.4s
 ✔ 0e83af98b000 Pull complete   16.4s
 ✔ 770e931107be Pull complete   16.6s
 ✔ a2be1b721112 Pull complete   16.6s
 ✔ 68c594672ed3 Pull complete   16.6s
 ✔ cfd201189145 Pull complete   55.2s
 ✔ e9f009c5b388 Pull complete   55.3s
 ✔ 61a291920391 Pull complete   87.0s
 ✔ c8604ede059a Pull complete   87.0s
 ✔ redis Pulled                 71.0s
 ✔ cd07ede39ddc Pull complete   40.5s
 ✔ 63df650ee4e0 Pull complete   43.1s
 ✔ c175c1c9487d Pull complete   55.0s
 ✔ 91cf9601b872 Pull complete   55.0s
 ✔ 4f4fb700ef54 Pull complete   55.0s
 ✔ c70d7dc4bd70 Pull complete   55.0s
 ✔ nginx Pulled                 53.6s
 ✔ 254e724d7786 Pull complete   20.5s
 ✔ 913115292750 Pull complete   31.3s
 ✔ 3e544d53ce49 Pull complete   32.7s
 ✔ 4f21ed9ac0c0 Pull complete   36.7s
 ✔ d38f2ef2d6f2 Pull complete   39.7s
 ✔ 40a6e9f4e456 Pull complete   42.7s
 ✔ d3dc5ec71e9d Pull complete   45.3s
[+] Running 3/4
 ✔ Network root_default  Created   0.0s
 ⠿ Container mysql       Starting  0.3s
 ✔ Container redis       Started   0.3s
 ✔ Container nginx       Started   0.3s
Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/opt/mysql/my.cnf" to rootfs at "/etc/my.cnf": create mountpoint for /etc/my.cnf mount: cannot create subdirectories in "/var/lib/docker/overlay2/1a5867fcf9c1f650da4bc51387cbd7621f1464eb2a1ce8d90f90f43733b34602/merged/etc/my.cnf": not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
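The error above is the classic file-vs-directory bind-mount mismatch: mounting a host path that does not exist (or is a directory) onto a container *file* path fails at container start. A sketch of a pre-flight check that surfaces the problem before runc does (the helper name is illustrative):

```shell
# Verify that a host path intended to be mounted onto a container FILE
# exists and is actually a regular file.
check_file_mount() {
    src=$1
    if [ ! -e "$src" ]; then
        echo "error: $src does not exist on the host"
        return 1
    elif [ -d "$src" ]; then
        echo "error: $src is a directory, but the mount target is a file"
        return 1
    fi
    echo "ok: $src is a regular file"
}

check_file_mount /opt/mysql/my.cnf || true
```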
[root@docker ~]# vim docker-compose.yaml
[root@docker ~]# cat docker-compose.yaml 
version: "3.9"
services:
  nginx:
    image: nginx:1.27.5
    container_name: nginx
    ports:
      - 80:80
    volumes:
      - /opt/nginx/conf:/etc/nginx/conf.d
      - /opt/nginx/html:/usr/share/nginx/html
  mysql:
    image: mysql:9.3.0
    container_name: mysql
    restart: always
    ports:
      - 3306:3306
    volumes:
      - /opt/mysql/data:/var/lib/mysql
      #- /opt/mysql/my.cnf:/etc/my.cnf
    command:
      - character-set-server=utf8mb4
      - collation-server=utf8mb4_general_ci
  redis:
    image: redis:8.0.0
    container_name: redis
    ports:
      - 6379:6379
    volumes:
      - /opt/redis/data:/data
[root@docker ~]# docker-compose up -d
WARN[0000] /root/docker-compose.yaml: the attribute `version` is obsolete, it will be ignored, please remove it to avoid potential confusion 
[+] Running 3/3
 ✔ Container redis  Running   0.0s
 ✔ Container mysql  Started   0.2s
 ✔ Container nginx  Running
[root@docker ~]# docker images
REPOSITORY   TAG       IMAGE ID       CREATED       SIZE
redis        8.0.0     d62dbaef1b81   5 days ago    128MB
nginx        1.27.5    a830707172e8   3 weeks ago   192MB
mysql        9.3.0     2c849dee4ca9   3 weeks ago   859MB
[root@docker ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED              STATUS                           PORTS                                         NAMES
4390422bf4b1   mysql:9.3.0    "docker-entrypoint.s…"   About a minute ago   Restarting (127) 7 seconds ago                                                 mysql
18713204332c   nginx:1.27.5   "/docker-entrypoint.…"   2 minutes ago        Up 2 minutes                     0.0.0.0:80->80/tcp, [::]:80->80/tcp           nginx
be2489a46e42   redis:8.0.0    "docker-entrypoint.s…"   2 minutes ago        Up 2 minutes                     0.0.0.0:6379->6379/tcp, [::]:6379->6379/tcp   redis
  • Access it
[root@docker ~]# curl localhost
curl: (56) Recv failure: Connection reset by peer
[root@docker ~]# echo "index.html" > /opt/nginx/html/index.html
[root@docker ~]# curl http://192.168.98.149
curl: (7) Failed to connect to 192.168.98.149 port 80: Connection refused
[root@docker ~]# cd /opt/nginx/conf/
[root@docker conf]# ls
[root@docker conf]# vim web.conf
[root@docker conf]# cat web.conf 
server {
	listen		80;
	server_name	192.168.98.149;
	root		/opt/nginx/html;
}
[root@docker conf]# curl http://192.168.98.149
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.27.5</center>
</body>
</html>
[root@docker conf]# vim web.conf
[root@docker conf]# cat web.conf 
server {
	listen		80;
	server_name	192.168.98.149;
	root		/usr/share/nginx/html;
}
[root@docker conf]# docker restart nginx
nginx
[root@docker conf]# curl http://192.168.98.149
index.html
  • Must be run from the directory that contains docker-compose.yaml
[root@docker ~]# docker-compose ps
WARN[0000] /root/docker-compose.yaml: the attribute `version` is obsolete, it will be ignored, please remove it to avoid potential confusion 
NAME      IMAGE          COMMAND                  SERVICE   CREATED          STATUS                            PORTS
mysql     mysql:9.3.0    "docker-entrypoint.s…"   mysql     21 minutes ago   Restarting (127) 52 seconds ago   
nginx     nginx:1.27.5   "/docker-entrypoint.…"   nginx     23 minutes ago   Up 11 minutes                     0.0.0.0:80->80/tcp, [::]:80->80/tcp
redis     redis:8.0.0    "docker-entrypoint.s…"   redis     23 minutes ago   Up 23 minutes                     0.0.0.0:6379->6379/tcp, [::]:6379->6379/tcp
  • To fix the warning, delete the first line of docker-compose.yaml (version: "3.9")
[root@docker ~]# vim docker-compose.yaml 
[root@docker ~]# cat docker-compose.yaml 
services:
  nginx:
    image: nginx:1.27.5
    container_name: nginx
    ports:
      - 80:80
    volumes:
      - /opt/nginx/conf:/etc/nginx/conf.d
      - /opt/nginx/html:/usr/share/nginx/html
  mysql:
    image: mysql:9.3.0
    container_name: mysql
    restart: always
    ports:
      - 3306:3306
    volumes:
      - /opt/mysql/data:/var/lib/mysql
      #- /opt/mysql/my.cnf:/etc/my.cnf
    command:
      - character-set-server=utf8mb4
      - collation-server=utf8mb4_general_ci
  redis:
    image: redis:8.0.0
    container_name: redis
    ports:
      - 6379:6379
    volumes:
      - /opt/redis/data:/data
[root@docker ~]# docker-compose ps
NAME      IMAGE          COMMAND                  SERVICE   CREATED          STATUS                           PORTS
mysql     mysql:9.3.0    "docker-entrypoint.s…"   mysql     24 minutes ago   Restarting (127) 3 seconds ago   
nginx     nginx:1.27.5   "/docker-entrypoint.…"   nginx     26 minutes ago   Up 41 seconds                    0.0.0.0:80->80/tcp, [::]:80->80/tcp
redis     redis:8.0.0    "docker-entrypoint.s…"   redis     26 minutes ago   Up 26 minutes                    0.0.0.0:6379->6379/tcp, [::]:6379->6379/tcp
[root@docker ~]# mkdir composetest
[root@docker ~]# cd composetest/
[root@docker composetest]# ls
[root@docker composetest]# vim app.py
[root@docker composetest]# cat app.py 
import time

import redis
from flask import Flask

app = Flask(__name__)
cache = redis.Redis(host='redis', port=6379)

def get_hit_count():
    retries = 5
    while True:
        try:
            return cache.incr('hits')
        except redis.exceptions.ConnectionError as exc:
            if retries == 0:
                raise exc
            retries -= 1
            time.sleep(0.5)

@app.route('/')
def hello():
    count = get_hit_count()
    return 'Hello World! I have been seen {} times.\n'.format(count)

if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True)
[root@docker composetest]# pip freeze > requirements.txt
[root@docker composetest]# cat requirements.txt 
flask
redis
[root@docker yum.repos.d]# mount /dev/sr0 /mnt
mount: /mnt: WARNING: source write-protected, mounted read-only.
[root@docker yum.repos.d]# dnf install pip -y
Updating Subscription Management repositories.
Unable to read consumer identity

This system is not registered with an entitlement server. You can use "rhc" or "subscription-manager" to register.

BaseOS                                                                                     2.7 MB/s | 2.7 kB     00:00    
AppStream                                                                                  510 kB/s | 3.2 kB     00:00    
Docker CE Stable - x86_64                                                                  4.2 kB/s | 3.5 kB     00:00    
Docker CE Stable - x86_64                                                                   31  B/s |  55  B     00:01    
Errors during downloading metadata for repository 'docker-ce-stable':
  - Curl error (35): SSL connect error for https://download.docker.com/linux/rhel/9/x86_64/stable/repodata/e0bf6fe4688b6f32c32f16998b1da1027a1ebfcec723b7c6c09e032effdb248a-primary.xml.gz [OpenSSL SSL_connect: Connection reset by peer in connection to download.docker.com:443 ]
  - Curl error (35): SSL connect error for https://download.docker.com/linux/rhel/9/x86_64/stable/repodata/65c4f66e2808d328890505c3c2f13bb35a96f457d1c21a6346191c4dc07e6080-updateinfo.xml.gz [OpenSSL SSL_connect: Connection reset by peer in connection to download.docker.com:443 ]
  - Curl error (35): SSL connect error for https://download.docker.com/linux/rhel/9/x86_64/stable/repodata/2fb592ad5a8fa5136a1d5992ce0a70f344c84f1601f0792d9573f8c5de73dffa-filelists.xml.gz [OpenSSL SSL_connect: Connection reset by peer in connection to download.docker.com:443 ]
Error: Failed to download metadata for repo 'docker-ce-stable': Yum repo downloading error: Downloading error(s): repodata/e0bf6fe4688b6f32c32f16998b1da1027a1ebfcec723b7c6c09e032effdb248a-primary.xml.gz - Cannot download, all mirrors were already tried without success; repodata/2fb592ad5a8fa5136a1d5992ce0a70f344c84f1601f0792d9573f8c5de73dffa-filelists.xml.gz - Cannot download, all mirrors were already tried without success
[root@docker yum.repos.d]# dnf install pip -y
Updating Subscription Management repositories.
Unable to read consumer identity

This system is not registered with an entitlement server. You can use "rhc" or "subscription-manager" to register.

Docker CE Stable - x86_64                                                                  4.5 kB/s | 3.5 kB     00:00    
......
Complete!
[root@docker ~]# ls
anaconda-ks.cfg  composetest  docker-compose.yaml
[root@docker ~]# docker-compose  down
[+] Running 4/4
 ✔ Container redis       Removed   0.1s
 ✔ Container nginx       Removed   0.1s
 ✔ Container mysql       Removed   0.0s
 ✔ Network root_default  Removed   0.1s
[root@docker ~]# docker-compose ps
NAME      IMAGE     COMMAND   SERVICE   CREATED   STATUS    PORTS
[root@docker ~]# cd composetest/
[root@docker composetest]# ls
[root@docker composetest]# vim Dockerfile
[root@docker composetest]# vim docker-compose.yaml
[root@docker composetest]# cat Dockerfile
FROM python:3.12-alpine
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
[root@docker composetest]# cat docker-compose.yaml
services:
  web:
    build: .
    ports:
      - 5000:5000
  redis:
    image: redis:alpine
# duplicate the session and access the service:
[root@docker composetest]# curl http://192.168.98.149:5000
Hello World! I have been seen 1 times.
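The counter behind this response lives in app.py, which the transcript never shows being created; the build context also needs a requirements.txt listing flask and redis, or the later RUN pip install step fails. The logic is the classic Compose quick-start hit counter; a runnable sketch of just the counting part, with a stand-in for the Redis client:

```python
# Stand-in for redis.Redis, for illustration only; incr() mirrors the Redis INCR command.
class FakeCache:
    def __init__(self):
        self.store = {}

    def incr(self, key):
        self.store[key] = self.store.get(key, 0) + 1
        return self.store[key]

# What the '/' route in app.py effectively does on each request.
def hello(cache):
    count = cache.incr('hits')
    return f'Hello World! I have been seen {count} times.'

print(hello(FakeCache()))  # → Hello World! I have been seen 1 times.
```

In the real app.py the cache is `redis.Redis(host='redis', port=6379)`: the hostname `redis` resolves to the redis service over the network Compose creates for the project.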
  • Run the command below to start the stack; after access from the duplicated session succeeds, exit.
[root@docker composetest]# docker-compose up
Compose can now delegate builds to bake for better performance.
To do so, set COMPOSE_BAKE=true.
[+] Building 50.1s (10/10) FINISHED                                              docker:default
 => [web internal] load build definition from Dockerfile                                   0.0s
 => => transferring dockerfile: 208B                                                       0.0s
 => [web internal] load metadata for docker.io/library/python:3.12-alpine                  3.6s
 => [web internal] load .dockerignore                                                      0.0s
 => [web internal] load build context                                                      0.0s
 => => transferring context: 392B                                                          0.0s
 => [web 1/4] FROM docker.io/library/python:3.12-alpine@sha256:a664b68d141849d28fd0992b9aa88b4eab7a21d258e2e7ddb9d1b59fe07535a5  6.9s
 ......
 => [web 2/4] ADD . /code                                                                  0.1s
 => [web 3/4] WORKDIR /code                                                                0.0s
 => [web 4/4] RUN pip install -r requirements.txt                                         39.3s
 => [web] exporting to image                                                               0.1s
 => => exporting layers                                                                    0.1s
 => => writing image sha256:32303d47b57e0c674340d90a76bb82776024626e64d95e04e0ff5d7e893caa80  0.0s
 => => naming to docker.io/library/composetest-web                                         0.0s
 => [web] resolving provenance for metadata file                                           0.0s
[+] Running 4/4
 ✔ web                            Built                                                    0.0s 
 ✔ Network composetest_default    Created                                                  0.0s 
 ✔ Container composetest-redis-1  Created                                                  0.0s 
 ✔ Container composetest-web-1    Created                                                  0.0s 
Attaching to redis-1, web-1
redis-1  | Starting Redis Server
redis-1  | 1:C 10 May 2025 08:28:20.677 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis-1  | 1:C 10 May 2025 08:28:20.680 * oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis-1  | 1:C 10 May 2025 08:28:20.680 * Redis version=8.0.0, bits=64, commit=00000000, modified=1, pid=1, just started
redis-1  | 1:C 10 May 2025 08:28:20.680 * Configuration loaded
redis-1  | 1:M 10 May 2025 08:28:20.683 * monotonic clock: POSIX clock_gettime
redis-1  | 1:M 10 May 2025 08:28:20.693 * Running mode=standalone, port=6379.
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> RedisBloom version 7.99.90 (Git=unknown)
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> Registering configuration options: [
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> 	{ bf-error-rate       :      0.01 }
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> 	{ bf-initial-size     :       100 }
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> 	{ bf-expansion-factor :         2 }
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> 	{ cf-bucket-size      :         2 }
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> 	{ cf-initial-size     :      1024 }
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> 	{ cf-max-iterations   :        20 }
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> 	{ cf-expansion-factor :         1 }
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> 	{ cf-max-expansions   :        32 }
redis-1  | 1:M 10 May 2025 08:28:20.709 * <bf> ]
redis-1  | 1:M 10 May 2025 08:28:20.709 * Module 'bf' loaded from /usr/local/lib/redis/modules//redisbloom.so
redis-1  | 1:M 10 May 2025 08:28:20.799 * <search> Redis version found by RedisSearch : 8.0.0 - oss
redis-1  | 1:M 10 May 2025 08:28:20.800 * <search> RediSearch version 8.0.0 (Git=HEAD-61787b7)
redis-1  | 1:M 10 May 2025 08:28:20.803 * <search> Low level api version 1 initialized successfully
redis-1  | 1:M 10 May 2025 08:28:20.808 * <search> gc: ON, prefix min length: 2, min word length to stem: 4, prefix max expansions: 200, query timeout (ms): 500, timeout policy: fail, cursor read size: 1000, cursor max idle (ms): 300000, max doctable size: 1000000, max number of search results:  1000000, 
redis-1  | 1:M 10 May 2025 08:28:20.810 * <search> Initialized thread pools!
redis-1  | 1:M 10 May 2025 08:28:20.810 * <search> Disabled workers threadpool of size 0
redis-1  | 1:M 10 May 2025 08:28:20.815 * <search> Subscribe to config changes
redis-1  | 1:M 10 May 2025 08:28:20.815 * <search> Enabled role change notification
redis-1  | 1:M 10 May 2025 08:28:20.815 * <search> Cluster configuration: AUTO partitions, type: 0, coordinator timeout: 0ms
redis-1  | 1:M 10 May 2025 08:28:20.816 * <search> Register write commands
redis-1  | 1:M 10 May 2025 08:28:20.816 * Module 'search' loaded from /usr/local/lib/redis/modules//redisearch.so
redis-1  | 1:M 10 May 2025 08:28:20.828 * <timeseries> RedisTimeSeries version 79991, git_sha=de1ad5089c15c42355806bbf51a0d0cf36f223f6
redis-1  | 1:M 10 May 2025 08:28:20.828 * <timeseries> Redis version found by RedisTimeSeries : 8.0.0 - oss
redis-1  | 1:M 10 May 2025 08:28:20.828 * <timeseries> Registering configuration options: [
redis-1  | 1:M 10 May 2025 08:28:20.828 * <timeseries> 	{ ts-compaction-policy   :              }
redis-1  | 1:M 10 May 2025 08:28:20.828 * <timeseries> 	{ ts-num-threads         :            3 }
redis-1  | 1:M 10 May 2025 08:28:20.828 * <timeseries> 	{ ts-retention-policy    :            0 }
redis-1  | 1:M 10 May 2025 08:28:20.828 * <timeseries> 	{ ts-duplicate-policy    :        block }
redis-1  | 1:M 10 May 2025 08:28:20.828 * <timeseries> 	{ ts-chunk-size-bytes    :         4096 }
redis-1  | 1:M 10 May 2025 08:28:20.828 * <timeseries> 	{ ts-encoding            :   compressed }
redis-1  | 1:M 10 May 2025 08:28:20.828 * <timeseries> 	{ ts-ignore-max-time-diff:            0 }
redis-1  | 1:M 10 May 2025 08:28:20.829 * <timeseries> 	{ ts-ignore-max-val-diff :     0.000000 }
redis-1  | 1:M 10 May 2025 08:28:20.829 * <timeseries> ]
redis-1  | 1:M 10 May 2025 08:28:20.833 * <timeseries> Detected redis oss
redis-1  | 1:M 10 May 2025 08:28:20.837 * Module 'timeseries' loaded from /usr/local/lib/redis/modules//redistimeseries.so
redis-1  | 1:M 10 May 2025 08:28:20.886 * <ReJSON> Created new data type 'ReJSON-RL'
redis-1  | 1:M 10 May 2025 08:28:20.900 * <ReJSON> version: 79990 git sha: unknown branch: unknown
redis-1  | 1:M 10 May 2025 08:28:20.900 * <ReJSON> Exported RedisJSON_V1 API
redis-1  | 1:M 10 May 2025 08:28:20.900 * <ReJSON> Exported RedisJSON_V2 API
redis-1  | 1:M 10 May 2025 08:28:20.900 * <ReJSON> Exported RedisJSON_V3 API
redis-1  | 1:M 10 May 2025 08:28:20.900 * <ReJSON> Exported RedisJSON_V4 API
redis-1  | 1:M 10 May 2025 08:28:20.900 * <ReJSON> Exported RedisJSON_V5 API
redis-1  | 1:M 10 May 2025 08:28:20.900 * <ReJSON> Enabled diskless replication
redis-1  | 1:M 10 May 2025 08:28:20.904 * <ReJSON> Initialized shared string cache, thread safe: false.
redis-1  | 1:M 10 May 2025 08:28:20.904 * Module 'ReJSON' loaded from /usr/local/lib/redis/modules//rejson.so
redis-1  | 1:M 10 May 2025 08:28:20.904 * <search> Acquired RedisJSON_V5 API
redis-1  | 1:M 10 May 2025 08:28:20.907 * Server initialized
redis-1  | 1:M 10 May 2025 08:28:20.910 * Ready to accept connections tcp
web-1    |  * Serving Flask app 'app'
web-1    |  * Debug mode: on
web-1    | WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
web-1    |  * Running on all addresses (0.0.0.0)
web-1    |  * Running on http://127.0.0.1:5000
web-1    |  * Running on http://172.18.0.3:5000
web-1    | Press CTRL+C to quit
web-1    |  * Restarting with stat
web-1    |  * Debugger is active!
web-1    |  * Debugger PIN: 102-900-685
web-1    | 192.168.98.149 - - [10/May/2025 08:31:06] "GET / HTTP/1.1" 200 -


failed to solve: process "/bin/sh -c pip install -r requirements.txt" did not complete successfully: exit code: 1
[root@docker composetest]# 
[root@docker composetest]# pip install --upgrade pip
Requirement already satisfied: pip in /usr/lib/python3.9/site-packages (21.3.1)

Note: this error occurs inside the image build (usually requirements.txt is missing from the build context, or the build has no network access), so upgrading pip on the host, as above, does not fix it.

Using Compose with Swarm

With docker service create you can only deploy one service at a time; with a docker-compose.yml file we can start several related services at once.

[root@docker ~]# ls
anaconda-ks.cfg  composetest  docker-compose.yaml
[root@docker ~]# mkdir dcswarm
[root@docker ~]# ls
anaconda-ks.cfg  composetest  dcswarm  docker-compose.yaml
[root@docker ~]# cd dcswarm/
[root@docker dcswarm]# vim docker-compose.yaml
[root@docker dcswarm]# cat docker-compose.yaml
services:
  nginx:
    image: nginx:1.27.5
    ports:
      - 80:80
      - 443:443
    volumes:
      - /opt/nginx/conf:/etc/nginx/conf.d
      - /opt/nginx/html:/usr/share/nginx/html
    deploy:
      mode: replicated
      replicas: 2
  mysql:
    image: mysql:9.3.0
    ports:
      - 3306:3306
    command:
      - --character-set-server=utf8mb4
      - --collation-server=utf8mb4_general_ci
    environment:
      MYSQL_ROOT_PASSWORD: "123456"
    volumes:
      - /opt/mysql/data:/var/lib/mysql
    deploy:
      mode: replicated
      replicas: 2
[root@docker dcswarm]# docker stack deploy -c docker-compose.yaml web
# docker stack deploy only works on a swarm manager (docker swarm init / docker swarm join --token ...),
# so copy the file to the manager node:
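A note on the ports: entries above: in swarm mode the short 80:80 form publishes through the ingress routing mesh on every node. The long form makes that explicit (a sketch of the same nginx mapping, not taken from the transcript):

```yaml
    ports:
      - target: 80        # port inside the container
        published: 80     # port exposed on every swarm node via the routing mesh
        protocol: tcp
        mode: ingress     # "host" would instead bind only on nodes running a task
```

If published is omitted, the swarm manager assigns a port from the 30000-32767 range described in section 1.1.4.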
[root@docker ~]# scp dcswarm/* root@192.168.98.47:~
root@192.168.98.47's password: 
docker-compose.yaml
[root@manager ~]# ls
anaconda-ks.cfg  docker-compose.yaml  nginx_1.27.4.tar.gz  swarm_1.2.9.tar.gz
[root@manager ~]# mkdir dcswarm
[root@manager ~]# mv docker-compose.yaml dcswarm/
[root@manager ~]# cd dcswarm/
[root@manager dcswarm]# ls
docker-compose.yaml
[root@manager dcswarm]# docker stack deploy -c docker-compose.yaml web
Since --detach=false was not specified, tasks will be created in the background.
In a future release, --detach=false will become the default.
Creating network web_default
Creating service web_mysql
Creating service web_nginx
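After the deploy, the stack can be inspected with the usual swarm commands (not shown in the transcript):

```shell
docker stack services web      # replica status per service, e.g. web_nginx 2/2
docker stack ps web            # which node each task was scheduled on
docker service logs web_mysql  # aggregated logs for one service
docker stack rm web            # remove the whole stack again
```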

三、Configuring a Private Registry (Harbor)

3.1 Environment Preparation

Harbor is an enterprise-class Registry server for storing and distributing Docker images.

  1. Set the hostname and IP address
  2. Enable IP forwarding in /etc/sysctl.conf
    net.ipv4.ip_forward = 1
  3. Configure host mappings in /etc/hosts
# set the hostname and IP address
[root@docker ~]# hostnamectl hostname harbor
[root@docker ~]# ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:f5:e5:24 brd ff:ff:ff:ff:ff:ff
    altname enp3s0
    inet 192.168.86.132/24 brd 192.168.86.255 scope global dynamic noprefixroute ens160
       valid_lft 1672sec preferred_lft 1672sec
    inet6 fe80::20c:29ff:fef5:e524/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: ens224: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:f5:e5:2e brd ff:ff:ff:ff:ff:ff
    altname enp19s0
    inet 192.168.98.159/24 brd 192.168.98.255 scope global dynamic noprefixroute ens224
       valid_lft 1672sec preferred_lft 1672sec
    inet6 fe80::fd7d:606d:1a1b:d3cc/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether ae:a7:3e:3a:39:bb brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
[root@docker ~]# nmcli c show 
NAME                UUID                                  TYPE      DEVICE  
Wired connection 1  110c742f-bd12-3ba3-b671-1972a75aa2e6  ethernet  ens224  
ens160              d622d6da-1540-371d-8def-acd3db9bd38d  ethernet  ens160  
lo                  d20cef01-6249-4012-908c-f775efe44118  loopback  lo      
docker0             b023990a-e131-4a68-828c-710158f77a50  bridge    docker0 
[root@docker ~]# nmcli c m "Wired connection 1" connection.id ens224
[root@docker ~]# nmcli c show 
NAME     UUID                                  TYPE      DEVICE  
ens224   110c742f-bd12-3ba3-b671-1972a75aa2e6  ethernet  ens224  
ens160   d622d6da-1540-371d-8def-acd3db9bd38d  ethernet  ens160  
lo       d20cef01-6249-4012-908c-f775efe44118  loopback  lo      
docker0  b023990a-e131-4a68-828c-710158f77a50  bridge    docker0 
[root@docker ~]# nmcli c m ens224 ipv4.method manual ipv4.addresses 192.168.98.20/24 ipv4.gateway 192.168.98.2 ipv4.dns 223.5.5.5 connection.autoconnect yes
[root@docker ~]# nmcli c up ens224 
[root@harbor ~]# nmcli c m ens160 ipv4.method manual ipv4.addresses 192.168.86.20/24 ipv4.gateway 192.168.86.200 ipv4.dns "223.5.5.5 8.8.8.8" connection.autoconnect yes
[root@harbor ~]# nmcli c up ens160 
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/6)
[root@harbor ~]# ip ad
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:f5:e5:24 brd ff:ff:ff:ff:ff:ff
    altname enp3s0
    inet 192.168.86.20/24 brd 192.168.86.255 scope global noprefixroute ens160
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fef5:e524/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: ens224: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:f5:e5:2e brd ff:ff:ff:ff:ff:ff
    altname enp19s0
    inet 192.168.98.20/24 brd 192.168.98.255 scope global noprefixroute ens224
       valid_lft forever preferred_lft forever
    inet6 fe80::fd7d:606d:1a1b:d3cc/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether ae:a7:3e:3a:39:bb brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

# enable IP forwarding
[root@harbor ~]# echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf 
[root@harbor ~]# sysctl -p
net.ipv4.ip_forward = 1
# configure host mappings
[root@harbor ~]# vim /etc/hosts
[root@harbor ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.86.11 k8s-master01 m1
192.168.86.12 k8s-node01 n1
192.168.86.13 k8s-node02 n2
192.168.86.20 harbor.registry.com harbor

3.2 Installing Docker

  1. Add the Docker package repository
  2. Install Docker
  3. Configure Docker
  4. Start Docker
  5. Verify Docker
[root@harbor ~]# vim /etc/docker/daemon.json
[root@harbor ~]# cat /etc/docker/daemon.json
{
    "default-ipc-mode": "shareable",                        # enable shareable IPC mode
    "data-root": "/data/docker",                            # directory where Docker stores its data
    "exec-opts": ["native.cgroupdriver=systemd"],           # use systemd as the cgroup driver
    "log-driver": "json-file",                              # json log format
    "log-opts": {
        "max-size": "100m",
        "max-file": "50"
    },
    "insecure-registries": ["https://harbor.registry.com"], # our own (private) registry address
    "registry-mirrors": [                                   # mirrors for pulling public images
        "https://docker.m.daocloud.io",
        "https://docker.imgdb.de",
        "https://docker-0.unsee.tech",
        "https://docker.hlmirror.com",
        "https://docker.1ms.run",
        "https://func.ink",
        "https://lispy.org",
        "https://docker.xiaogenban1993.com"
    ]
}
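One caveat: daemon.json must be strict JSON, so the # annotations above are explanatory only and must not appear in the real file, or dockerd will refuse to start. A quick way to sanity-check the file before restarting Docker; the check_daemon_json helper is hypothetical, not part of Docker:

```python
import json

def check_daemon_json(text):
    # json.loads raises json.JSONDecodeError on '#' comments, trailing commas,
    # or other syntax errors that would prevent dockerd from starting
    return json.loads(text)

config = check_daemon_json('''
{
    "data-root": "/data/docker",
    "log-driver": "json-file",
    "insecure-registries": ["https://harbor.registry.com"]
}
''')
print(sorted(config))  # → ['data-root', 'insecure-registries', 'log-driver']
```

The same check can be run in place with `python3 -m json.tool /etc/docker/daemon.json`.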
[root@harbor ~]# mkdir -p /data/docker
[root@harbor ~]# systemctl restart docker
[root@harbor ~]# ls /data/docker/
buildkit  containers  engine-id  image  network  overlay2  plugins  runtimes  swarm  tmp  volumes

3.3 Installing docker-compose

  1. Download
  2. Install
  3. Make it executable
  4. Verify
[root@harbor ~]# mv docker-compose-linux-x86_64 /usr/bin/docker-compose
[root@harbor ~]# chmod +x /usr/bin/docker-compose 
[root@harbor ~]# docker-compose --version
Docker Compose version v2.35.1
[root@harbor ~]# cd /data/
[root@harbor data]# mv /root/harbor-offline-installer-v2.13.0.tgz .
[root@harbor data]# ls
docker  harbor-offline-installer-v2.13.0.tgz
[root@harbor data]# tar -xzf harbor-offline-installer-v2.13.0.tgz 
[root@harbor data]# ls
docker  harbor  harbor-offline-installer-v2.13.0.tgz
[root@harbor data]# rm -f *.tgz
[root@harbor data]# ls
docker  harbor
[root@harbor data]# cd harbor/
[root@harbor harbor]# ls
common.sh  harbor.v2.13.0.tar.gz  harbor.yml.tmpl  install.sh  LICENSE  prepare

3.4 Preparing Harbor

  1. Download Harbor
  2. Extract the archive

3.5 Configuring Certificates

  1. Generate a CA certificate
  2. Generate a server certificate
  3. Provide the certificates to Harbor and Docker

3.6 Deploying and Configuring Harbor

  1. Configure Harbor
  2. Load the Harbor images
  3. Check the installation environment
  4. Start Harbor
  5. List the running containers

3.7 Configuring a Startup Service

  1. Stop Harbor
  2. Write the service file
  3. Start the Harbor service

3.8 Customizing the Local Registry

  1. Configure the host mapping
  2. Configure the registry

3.9 Testing the Local Registry

  1. Pull an image
  2. Tag the image
  3. Log in to the registry
  4. Push the image
  5. Pull the image from the registry
  • Configure certificates
    https://goharbor.io/docs/2.13.0/install-config/installation-prereqs/
[root@harbor harbor]# mkdir ssl
[root@harbor harbor]# cd ssl
[root@harbor ssl]# openssl genrsa -out ca.key 4096
[root@harbor ssl]# ls
ca.key
[root@harbor ssl]# openssl req -x509 -new -nodes -sha512 -days 3650 \
    -subj "/C=CN/ST=Chongqing/L=Banan/O=example/OU=Personal/CN=MyPersonal Root CA" \
    -key ca.key \
    -out ca.crt
[root@harbor ssl]# ls
ca.crt  ca.key
[root@harbor ssl]# openssl genrsa -out harbor.registry.com.key 4096
[root@harbor ssl]# ls
ca.crt  ca.key  harbor.registry.com.key
[root@harbor ssl]# openssl req -sha512 -new \
    -subj "/C=CN/ST=Chongqing/L=Banan/O=example/OU=Personal/CN=harbor.registry.com" \
    -key harbor.registry.com.key \
    -out harbor.registry.com.csr
[root@harbor ssl]# ls
ca.crt  ca.key  harbor.registry.com.csr  harbor.registry.com.key
[root@harbor ssl]# cat > v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
DNS.1=harbor.registry.com
DNS.2=harbor.registry
DNS.3=harbor
EOF
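The subjectAltName entries in v3.ext are what clients actually verify: Docker trusts the registry only if the hostname it dialed appears in the certificate's SAN list (modern TLS stacks no longer consult the CN for this). A toy illustration of the matching rule; san_matches is a hypothetical helper, not OpenSSL code:

```python
# DNS.1-DNS.3 from the v3.ext file above
SAN_LIST = ["harbor.registry.com", "harbor.registry", "harbor"]

def san_matches(dialed_host, san_list):
    # True if the name the client connected to is in the SAN list; real TLS
    # stacks also handle wildcard and IP entries, omitted here for brevity
    return dialed_host.lower() in [name.lower() for name in san_list]

print(san_matches("harbor.registry.com", SAN_LIST))   # → True
print(san_matches("registry.example.com", SAN_LIST))  # → False
```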
[root@harbor ssl]# ls
ca.crt  ca.key  harbor.registry.com.csr  harbor.registry.com.key  v3.ext
[root@harbor ssl]# openssl x509 -req -sha512 -days 3650 \
    -extfile v3.ext \
    -CA ca.crt -CAkey ca.key -CAcreateserial \
    -in harbor.registry.com.csr \
    -out harbor.registry.com.crt
Certificate request self-signature ok
subject=C=CN, ST=Chongqing, L=Banan, O=example, OU=Personal, CN=harbor.registry.com
[root@harbor ssl]# ls
ca.crt  ca.key  ca.srl  harbor.registry.com.crt  harbor.registry.com.csr  harbor.registry.com.key  v3.ext
[root@harbor ssl]# mkdir /data/cert
[root@harbor ssl]# cp harbor.registry.com.crt /data/cert/
[root@harbor ssl]# cp harbor.registry.com.key /data/cert/
[root@harbor ssl]# ls /data/cert/
harbor.registry.com.crt  harbor.registry.com.key
[root@harbor ssl]# openssl x509 -inform PEM -in harbor.registry.com.crt -out harbor.registry.com.cert
[root@harbor ssl]# ls
ca.crt  ca.srl                    harbor.registry.com.crt  harbor.registry.com.key
ca.key  harbor.registry.com.cert  harbor.registry.com.csr  v3.ext
[root@harbor ssl]# mkdir -p /etc/docker/certs.d/harbor.registry.com:443
[root@harbor ssl]# cp harbor.registry.com.cert /etc/docker/certs.d/harbor.registry.com:443/
[root@harbor ssl]# cp harbor.registry.com.key /etc/docker/certs.d/harbor.registry.com:443/
[root@harbor ssl]# cp ca.crt /etc/docker/certs.d/harbor.registry.com:443/
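Docker looks these certificates up by directory name, which must match the registry address including the port; the files are then picked up by extension. The layout just created (following Docker's documented certs.d convention):

```
/etc/docker/certs.d/
└── harbor.registry.com:443/
    ├── harbor.registry.com.cert   # client certificate presented to the registry
    ├── harbor.registry.com.key    # client key
    └── ca.crt                     # CA certificate used to verify the registry
```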
[root@harbor ssl]# systemctl restart docker
[root@harbor ssl]# systemctl status docker
● docker.service - Docker Application Container Engine
     Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; preset: disabled)
     Active: active (running) since Sun 2025-05-11 10:42:37 CST; 1min 25s ago
TriggeredBy: ● docker.socket
       Docs: https://docs.docker.com
   Main PID: 3322 (dockerd)
      Tasks: 10
     Memory: 29.7M
        CPU: 353ms
     CGroup: /system.slice/docker.service
             └─3322 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

May 11 10:42:36 harbor dockerd[3322]: time="2025-05-11T10:42:36.466441227+08:00" level=info msg="Creating a contai>
[root@harbor ssl]# cd ..
[root@harbor harbor]# ls
common.sh  harbor.v2.13.0.tar.gz  harbor.yml.tmpl  install.sh  LICENSE  prepare  ssl
[root@harbor harbor]# cp harbor.yml.tmpl harbor.yml
[root@harbor harbor]# vim harbor.yml
[root@harbor harbor]# cat harbor.yml
# Configuration file of Harbor

# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: harbor.registry.com  # changed

# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 80

# https related config
https:
  # https port for harbor, default is 443
  port: 443
  # The path of cert and key files for nginx
  certificate: /data/cert/harbor.registry.com.crt  # changed
  private_key: /data/cert/harbor.registry.com.key  # changed
...............
[root@harbor harbor]# docker images
REPOSITORY   TAG       IMAGE ID   CREATED   SIZE
[root@harbor harbor]# docker load -i harbor.v2.13.0.tar.gz 
874b37071853: Loading layer [==================================================>]  
........
832349ff3d50: Loading layer [==================================================>]  38.95MB/38.95MB
Loaded image: goharbor/harbor-exporter:v2.13.0
[root@harbor harbor]# docker images
REPOSITORY                      TAG       IMAGE ID       CREATED       SIZE
goharbor/harbor-exporter        v2.13.0   0be56feff492   4 weeks ago   127MB
goharbor/redis-photon           v2.13.0   7c0d9781ab12   4 weeks ago   166MB
goharbor/trivy-adapter-photon   v2.13.0   f2b4d5497558   4 weeks ago   381MB
goharbor/harbor-registryctl     v2.13.0   bbd957df71d6   4 weeks ago   162MB
goharbor/registry-photon        v2.13.0   fa23989bf194   4 weeks ago   85.9MB
goharbor/nginx-photon           v2.13.0   c922d86a7218   4 weeks ago   151MB
goharbor/harbor-log             v2.13.0   463b8f469e21   4 weeks ago   164MB
goharbor/harbor-jobservice      v2.13.0   112a1616822d   4 weeks ago   174MB
goharbor/harbor-core            v2.13.0   b90fcb27fd54   4 weeks ago   197MB
goharbor/harbor-portal          v2.13.0   858f92a0f5f9   4 weeks ago   159MB
goharbor/harbor-db              v2.13.0   13a2b78e8616   4 weeks ago   273MB
goharbor/prepare                v2.13.0   2380b5a4f127   4 weeks ago   205MB
[root@harbor harbor]# ls
common.sh  harbor.v2.13.0.tar.gz  harbor.yml  harbor.yml.tmpl  install.sh  LICENSE  prepare  ssl
[root@harbor harbor]# ./prepare
prepare base dir is set to /data/harbor
Generated configuration file: /config/portal/nginx.conf
Generated configuration file: /config/log/logrotate.conf
Generated configuration file: /config/log/rsyslog_docker.conf
Generated configuration file: /config/nginx/nginx.conf
Generated configuration file: /config/core/env
Generated configuration file: /config/core/app.conf
Generated configuration file: /config/registry/config.yml
Generated configuration file: /config/registryctl/env
Generated configuration file: /config/registryctl/config.yml
Generated configuration file: /config/db/env
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
copy /data/secret/tls/harbor_internal_ca.crt to shared trust ca dir as name harbor_internal_ca.crt ...
ca file /hostfs/data/secret/tls/harbor_internal_ca.crt is not exist
copy  to shared trust ca dir as name storage_ca_bundle.crt ...
copy None to shared trust ca dir as name redis_tls_ca.crt ...
Generated and saved secret to file: /data/secret/keys/secretkey
Successfully called func: create_root_cert
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir

# start Harbor
[root@harbor harbor]# ./install.sh

[Step 0]: checking if docker is installed ...

Note: docker version: 28.0.4

[Step 1]: checking docker-compose is installed ...

Note: Docker Compose version v2.34.0

[Step 2]: loading Harbor images ...
Loaded image: goharbor/harbor-db:v2.13.0
Loaded image: goharbor/harbor-jobservice:v2.13.0
Loaded image: goharbor/harbor-registryctl:v2.13.0
Loaded image: goharbor/redis-photon:v2.13.0
Loaded image: goharbor/trivy-adapter-photon:v2.13.0
Loaded image: goharbor/nginx-photon:v2.13.0
Loaded image: goharbor/registry-photon:v2.13.0
Loaded image: goharbor/prepare:v2.13.0
Loaded image: goharbor/harbor-portal:v2.13.0
Loaded image: goharbor/harbor-core:v2.13.0
Loaded image: goharbor/harbor-log:v2.13.0
Loaded image: goharbor/harbor-exporter:v2.13.0

[Step 3]: preparing environment ...

[Step 4]: preparing harbor configs ...
prepare base dir is set to /data/harbor
Clearing the configuration file: /config/portal/nginx.conf
.......
Generated configuration file: /config/portal/nginx.conf
.......
Generated configuration file: /config/jobservice/env
Generated configuration file: /config/jobservice/config.yml
copy /data/secret/tls/harbor_internal_ca.crt to shared trust ca dir as name harbor_internal_ca.crt ...
ca file /hostfs/data/secret/tls/harbor_internal_ca.crt is not exist
copy  to shared trust ca dir as name storage_ca_bundle.crt ...
copy None to shared trust ca dir as name redis_tls_ca.crt ...
loaded secret from file: /data/secret/keys/secretkey
Generated configuration file: /compose_location/docker-compose.yml
Clean up the input dir

Note: stopping existing Harbor instance ...

[Step 5]: starting Harbor ...
[+] Running 10/10        # all 10 containers are running
 ✔ Network harbor_harbor        Created                                                    0.0s 
.........                                                                                  1.3s 
✔ ----Harbor has been installed and started successfully.----
  • Configure the startup service
# removes the containers, but not the images
[root@harbor harbor]# docker-compose down
[+] Running 10/10
 ✔ Container harbor-jobservice  Removed                                                    0.1s 
 ✔ Container registryctl        Removed                                                    0.1s 
 ✔ Container nginx              Removed                                                    0.1s 
 ✔ Container harbor-portal      Removed                                                    0.1s 
 ✔ Container harbor-core        Removed                                                    0.1s 
 ✔ Container registry           Removed                                                    0.1s 
 ✔ Container redis              Removed                                                    0.1s 
 ✔ Container harbor-db          Removed                                                    0.2s 
 ✔ Container harbor-log         Removed                                                   10.1s 
 ✔ Network harbor_harbor        Removed                                                    0.1s 
[root@harbor harbor]# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@harbor harbor]# docker images
REPOSITORY                      TAG       IMAGE ID       CREATED       SIZE
goharbor/harbor-exporter        v2.13.0   0be56feff492   4 weeks ago   127MB
goharbor/redis-photon           v2.13.0   7c0d9781ab12   4 weeks ago   166MB
goharbor/trivy-adapter-photon   v2.13.0   f2b4d5497558   4 weeks ago   381MB
goharbor/harbor-registryctl     v2.13.0   bbd957df71d6   4 weeks ago   162MB
goharbor/registry-photon        v2.13.0   fa23989bf194   4 weeks ago   85.9MB
goharbor/nginx-photon           v2.13.0   c922d86a7218   4 weeks ago   151MB
goharbor/harbor-log             v2.13.0   463b8f469e21   4 weeks ago   164MB
goharbor/harbor-jobservice      v2.13.0   112a1616822d   4 weeks ago   174MB
goharbor/harbor-core            v2.13.0   b90fcb27fd54   4 weeks ago   197MB
goharbor/harbor-portal          v2.13.0   858f92a0f5f9   4 weeks ago   159MB
goharbor/harbor-db              v2.13.0   13a2b78e8616   4 weeks ago   273MB
goharbor/prepare                v2.13.0   2380b5a4f127   4 weeks ago   205MB

# write the service file (the [Unit], [Service] and [Install] sections are all required)
[root@harbor harbor]# vim /usr/lib/systemd/system/harbor.service
[root@harbor harbor]# cat /usr/lib/systemd/system/harbor.service
[Unit]
# Dependencies, ordering, and a description of the service
Description=Harbor
# Ordering only: start after these units (not a hard dependency)
After=docker.service systemd-networkd.service systemd-resolved.service
# Hard dependency: docker.service must be started first
Requires=docker.service
Documentation=http://github.com/vmware/harbor

[Service]
# How the service is started, stopped and restarted
Type=simple
# Restart on failure, after a 5-second delay
Restart=on-failure
RestartSec=5
# /usr/bin/docker-compose is the path docker-compose was installed to earlier
ExecStart=/usr/bin/docker-compose --file /data/harbor/docker-compose.yml up
ExecStop=/usr/bin/docker-compose --file /data/harbor/docker-compose.yml down

[Install]
WantedBy=multi-user.target
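Step 3 of section 3.7 (starting the Harbor service) is not shown in the transcript; with the unit file in place, it is presumably the standard systemd sequence:

```shell
systemctl daemon-reload          # make systemd re-read the unit files
systemctl enable --now harbor    # start Harbor now and at every boot
systemctl status harbor          # confirm the compose stack is coming up
```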
  • /usr/bin/docker-compose is the path docker-compose was installed to earlier; note that systemd unit files only allow comments on their own lines, not after a directive.
[root@harbor ~]# cd /data/harbor/
[root@harbor harbor]# ls
common     docker-compose.yml     harbor.yml       install.sh  prepare
common.sh  harbor.v2.13.0.tar.gz  harbor.yml.tmpl  LICENSE     ssl
[root@harbor harbor]# cat docker-compose.yml 
services:
  log:
    image: goharbor/harbor-log:v2.13.0
    container_name: harbor-log
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - DAC_OVERRIDE
      - SETGID
      - SETUID
    volumes:
      - /var/log/harbor/:/var/log/docker/:z
      - type: bind
        source: ./common/config/log/logrotate.conf
        target: /etc/logrotate.d/logrotate.conf
      - type: bind
        source: ./common/config/log/rsyslog_docker.conf
        target: /etc/rsyslog.d/rsyslog_docker.conf
    ports:
      - 127.0.0.1:1514:10514
    networks:
      - harbor
  registry:
    image: goharbor/registry-photon:v2.13.0
    container_name: registry
    restart: always
    volumes:
      - /data/registry:/storage:z
      - ./common/config/registry/:/etc/registry/:z
    networks:
      - harbor
    depends_on:
      - log
    ......
  registryctl:
    image: goharbor/harbor-registryctl:v2.13.0
    container_name: registryctl
    ......
  postgresql:
    image: goharbor/harbor-db:v2.13.0
    container_name: harbor-db
    volumes:
      - /data/database:/var/lib/postgresql/data:z
    shm_size: '1gb'
    ......
  core:
    image: goharbor/harbor-core:v2.13.0
    container_name: harbor-core
    depends_on:
      - log
      - registry
      - redis
      - postgresql
    ......
  portal:
    image: goharbor/harbor-portal:v2.13.0
    container_name: harbor-portal
    ......
  jobservice:
    image: goharbor/harbor-jobservice:v2.13.0
    container_name: harbor-jobservice
    volumes:
      - /data/job_logs:/var/log/jobs:z
    depends_on:
      - core
    ......
  redis:
    image: goharbor/redis-photon:v2.13.0
    container_name: redis
    volumes:
      - /data/redis:/var/lib/redis
    ......
  proxy:
    image: goharbor/nginx-photon:v2.13.0
    container_name: nginx
    restart: always
    volumes:
      - ./common/config/nginx:/etc/nginx:z
      - /data/secret/cert:/etc/cert:z
    networks:
      - harbor
    ports:
      - 80:8080
      - 443:8443
    depends_on:
      - registry
      - core
      - portal
      - log
    ......
networks:harbor:external: false
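The `depends_on` entries in this compose file are what give Harbor its startup order: `log` comes up first (every other service ships its logs to it over syslog), `core` waits for the registry, redis, and postgresql, and the nginx `proxy` only starts once the services it fronts are up. As a minimal sketch of how Compose derives that order, the dependency map below is transcribed by hand from the file above and topologically sorted with Python's standard-library `graphlib` (the dict itself is the only input; no Docker is involved):

```python
from graphlib import TopologicalSorter

# depends_on relations transcribed from the Harbor docker-compose.yml above:
# each service maps to the set of services that must be started before it.
depends_on = {
    "log": set(),
    "registry": {"log"},
    "registryctl": {"log"},
    "postgresql": {"log"},
    "redis": {"log"},
    "core": {"log", "registry", "redis", "postgresql"},
    "portal": {"log"},
    "jobservice": {"core"},
    "proxy": {"registry", "core", "portal", "log"},
}

# TopologicalSorter takes node -> predecessors, which matches depends_on
# semantics directly: every predecessor appears earlier in the order.
order = list(TopologicalSorter(depends_on).static_order())
print(order)  # "log" is always first; "jobservice" and "proxy" come after "core"
```

Any order the sorter emits is a valid startup sequence; Compose makes the same guarantee (dependencies started first) without promising one specific total order.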







