Deploying EMQX, MinIO, Redis, Nacos, and Kafka clusters with docker-compose

This article walks through deploying EMQX, MinIO, Redis, Nacos, and Kafka clusters with docker-compose (and, for some services, plain docker commands); it should be a useful reference for anyone setting up these services.

Deploying a MinIO cluster with docker-compose

Reference: https://blog.csdn.net/kea_iv/article/details/108061337

Create a new folder and, inside it, create a docker-compose.yaml:
vi docker-compose.yaml
Copy the following content into it:

#docker-compose.yaml
version: '3.7'

# starts 4 docker containers running minio server instances. Each
# minio server's web interface will be accessible on the host at port
# 9001 through 9004.
services:
  minio1:
    image: minio/minio:RELEASE.2020-08-08T04-50-06Z
    volumes:
      - data1-1:/data1
      - data1-2:/data2
    ports:
      - "9001:9000"
    environment:
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
    command: server http://minio{1...4}/data{1...2}
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

  minio2:
    image: minio/minio:RELEASE.2020-08-08T04-50-06Z
    volumes:
      - data2-1:/data1
      - data2-2:/data2
    ports:
      - "9002:9000"
    environment:
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
    command: server http://minio{1...4}/data{1...2}
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

  minio3:
    image: minio/minio:RELEASE.2020-08-08T04-50-06Z
    volumes:
      - data3-1:/data1
      - data3-2:/data2
    ports:
      - "9003:9000"
    environment:
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
    command: server http://minio{1...4}/data{1...2}
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

  minio4:
    image: minio/minio:RELEASE.2020-08-08T04-50-06Z
    volumes:
      - data4-1:/data1
      - data4-2:/data2
    ports:
      - "9004:9000"
    environment:
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
    command: server http://minio{1...4}/data{1...2}
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

## By default this config uses default local driver,
## For custom volumes replace with volume driver configuration.
volumes:
  data1-1:
  data1-2:
  data2-1:
  data2-2:
  data3-1:
  data3-2:
  data4-1:
  data4-2:
  1. In this folder, run docker-compose to create the cluster environment:
    docker-compose up -d
  2. After startup succeeds, list the running docker containers: there should be 4 minio containers, mapped to host ports 9001, 9002, 9003, and 9004 (see the quick check sketched below).
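
As a quick sanity check, each node can be probed on the same liveness endpoint that the compose file's healthcheck uses, via the mapped host ports (a minimal sketch; the ports match the mappings above):

for port in 9001 9002 9003 9004; do
  # every node should return HTTP 200 on the liveness endpoint
  curl -sf -o /dev/null -w "minio on port $port: %{http_code}\n" "http://localhost:$port/minio/health/live"
done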

An error may appear when bringing the stack up because the Compose file version is not supported by your docker-compose binary; simply change the version in the yaml file to a supported one (a version check is sketched after the error output below). Here it was changed to version: '3.2'.
========================================================
ERROR: Version in "./docker-compose.yaml" is unsupported. You might be seeing this error because you're using the wrong Compose file version. Either specify a supported version ("2.0", "2.1", "3.0", "3.1", "3.2") and place your service definitions under the services key, or omit the version key and place your service definitions at the root of the file to use version 1.
For more on the Compose file format versions, see https://docs.docker.com/compose/compose-file/
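
To see which docker-compose release you are running, and therefore which Compose file versions it accepts, a quick check (assuming the classic docker-compose v1 CLI) is:

docker-compose version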

Deploying a Redis cluster with docker

Reference: https://blog.csdn.net/weixin_46129192/article/details/124072784

First, create a bridge network:
docker network create -d bridge --subnet=172.38.0.0/16 redis

Then run the following script to generate the configuration and start the containers:

# Generate a redis.conf for each of the 6 nodes in one loop
for port in $(seq 1 6); \
do \
mkdir -p /mydata/redis/node-${port}/conf
touch /mydata/redis/node-${port}/conf/redis.conf
cat << EOF >/mydata/redis/node-${port}/conf/redis.conf
port 6379
bind 0.0.0.0
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
cluster-announce-ip 172.38.0.1${port}
cluster-announce-port 6379
cluster-announce-bus-port 16379
appendonly yes
EOF
# Start the 6 redis containers in the same loop
docker run -p 637${port}:6379 -p 1637${port}:16379 --name redis-${port} \
-v /mydata/redis/node-${port}/data:/data \
-v /mydata/redis/node-${port}/conf/redis.conf:/etc/redis/redis.conf \
-d --net redis --ip 172.38.0.1${port} redis:5.0.9-alpine3.11 redis-server /etc/redis/redis.conf
done

If the loop finishes without errors, the creation succeeded.

Check the created containers (and, if needed, form the cluster), as sketched below.
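
A minimal sketch of the verification, plus the usual follow-up step of actually forming the Redis cluster from inside one node. The cluster-create command is not part of the original text, so treat it as an assumed next step; the node IPs are the cluster-announce-ip values written by the loop above:

# list the 6 redis containers
docker ps --filter "name=redis-"

# assumed next step: form a 3-master / 3-replica cluster from inside node 1
docker exec -it redis-1 redis-cli --cluster create \
  172.38.0.11:6379 172.38.0.12:6379 172.38.0.13:6379 \
  172.38.0.14:6379 172.38.0.15:6379 172.38.0.16:6379 \
  --cluster-replicas 1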

Deploying an EMQX cluster + Nginx with docker

Reference: https://www.cnblogs.com/innocenter/p/16175989.html
Pull the image:
docker pull emqx/emqx

Create the virtual network:
docker network create -d bridge --subnet=172.18.0.0/16 emqx_bridge

Start the services
Node 1:
docker run -d --hostname emqx01 --name emqx01 --network emqx_bridge --ip 172.18.0.2 -p 60001:1883 -p 60004:8083 -p 60007:8883 -p 60010:8084 -p 60017:18083 -p 60013:11883 -v /etc/localtime:/etc/localtime:ro emqx/emqx:latest
Node 2:
docker run -d --hostname emqx02 --name emqx02 --network emqx_bridge --ip 172.18.0.3 -p 60002:1883 -p 60005:8083 -p 60008:8883 -p 60011:8084 -p 60018:18083 -p 60014:11883 -v /etc/localtime:/etc/localtime:ro emqx/emqx:latest
Node 3:
docker run -d --hostname emqx03 --name emqx03 --network emqx_bridge --ip 172.18.0.4 -p 60003:1883 -p 60006:8083 -p 60009:8883 -p 60012:8084 -p 60019:18083 -p 60015:11883 -v /etc/localtime:/etc/localtime:ro emqx/emqx:latest

Configure cluster membership
Node 2:

docker exec -it emqx02 sh

bin/emqx_ctl  cluster join emqx01@172.18.0.2

exit

Node 3:

docker exec -it emqx03 sh

bin/emqx_ctl  cluster join emqx01@172.18.0.2

bin/emqx_ctl cluster status

exit
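
After both joins, the three-node cluster can also be confirmed from node 1, reusing the same emqx_ctl command shown above:

docker exec -it emqx01 bin/emqx_ctl cluster status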

Load balancing
Pull the image:
docker pull nginx
Start a temporary nginx container (its default configuration will be copied to the host below, and the container will then be removed):
docker run --name nginx -p 80:80 -d nginx
Create the local directories for the mapped configuration files:

mkdir -p /data/nginx

mkdir -p /data/nginx/www

mkdir -p /data/nginx/conf

mkdir -p /data/nginx/logs

Copy the configuration files to the host

If the short container ID from docker ps does not work, query the full container ID with: docker inspect -f '{{.ID}}' <container name>

docker cp <container id>:/etc/nginx/nginx.conf /data/nginx/

docker cp <container id>:/etc/nginx/conf.d /data/nginx/conf/

docker cp <container id>:/usr/share/nginx/html/ /data/nginx/www/

docker cp <container id>:/var/log/nginx/ /data/nginx/logs/

Remove the temporary container:
docker stop <container id> && docker rm <container id>

Edit the configuration files
/data/nginx/conf/default.conf is left unchanged here.

Modify /data/nginx/nginx.conf as follows:

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

stream {
        # emqx tcp
        upstream emqxTcp {
                #hash $remote_addr consistent;
                server 192.168.86.63:60001 max_fails=3 fail_timeout=30s;
                server 192.168.86.63:60002 max_fails=3 fail_timeout=30s;
                server 192.168.86.63:60003 max_fails=3 fail_timeout=30s;
        }
        # emqx tcp server
        server {
                listen 1883;
                #proxy_timeout 180s;
                proxy_pass emqxTcp;
        }
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;  # pull in the server blocks under conf.d

    upstream stream_backend {
          server 192.168.86.63:60017 max_fails=2 fail_timeout=30s;
          server 192.168.86.63:60018 max_fails=2 fail_timeout=30s;
          server 192.168.86.63:60019 max_fails=2 fail_timeout=30s;
    }

    server {
        listen    80;
        server_name   localhost;
        location / {
            proxy_pass  http://stream_backend;
        }
    }
}

Start the nginx container:

docker run --name nginx -p 80:80 -p 1883:1883 --network emqx_bridge --ip 172.18.0.6   -v /data/nginx/nginx.conf:/etc/nginx/nginx.conf  -v /data/nginx/www/:/usr/share/nginx/html/  -v /data/nginx/logs/:/var/log/nginx/  -v  /data/nginx/conf/:/etc/nginx/conf.d  --privileged=true -d nginx
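
Once the container is running, you can verify that the mounted configuration parses cleanly using nginx's built-in config test:

docker exec nginx nginx -t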

Page access

Open 192.168.86.63:80 in a browser.
Nginx forwards each request to a different EMQX dashboard node (round-robin is the load-balancing strategy currently in use).
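
To exercise the MQTT path through the load balancer as well, publish and subscribe through port 1883 on the nginx host. This assumes the mosquitto command-line clients are installed on the test machine:

# subscribe in one terminal
mosquitto_sub -h 192.168.86.63 -p 1883 -t demo/topic

# publish from another terminal; the message arrives via whichever EMQX node nginx selected
mosquitto_pub -h 192.168.86.63 -p 1883 -t demo/topic -m "hello"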

Deploying a Kafka cluster with docker-compose

Reference: https://www.cnblogs.com/xuwenjin/p/14917360.html

Create a folder, create a docker-compose.yaml inside it, and copy the following content into it:

version: '3.3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    ports:
      - 2181:2181
    volumes:
      - /data/zookeeper/data:/data
      - /data/zookeeper/datalog:/datalog
      - /data/zookeeper/logs:/logs
    restart: always
  kafka1:
    image: wurstmeister/kafka
    depends_on:
      - zookeeper
    container_name: kafka1
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 192.168.86.63:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.86.63:9092
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
      KAFKA_LOG_DIRS: /data/kafka-data
      KAFKA_LOG_RETENTION_HOURS: 24
    volumes:
      - /data/kafka1/data:/data/kafka-data
    restart: unless-stopped  
  kafka2:
    image: wurstmeister/kafka
    depends_on:
      - zookeeper
    container_name: kafka2
    ports:
      - 9093:9093
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: 192.168.86.63:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.86.63:9093
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9093
      KAFKA_LOG_DIRS: /data/kafka-data
      KAFKA_LOG_RETENTION_HOURS: 24
    volumes:
      - /data/kafka2/data:/data/kafka-data
    restart: unless-stopped
  kafka3:
    image: wurstmeister/kafka
    depends_on:
      - zookeeper
    container_name: kafka3
    ports:
      - 9094:9094
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_ZOOKEEPER_CONNECT: 192.168.86.63:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.86.63:9094
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9094
      KAFKA_LOG_DIRS: /data/kafka-data
      KAFKA_LOG_RETENTION_HOURS: 24
    volumes:
      - /data/kafka3/data:/data/kafka-data
    restart: unless-stopped

Parameter notes:

  • KAFKA_ZOOKEEPER_CONNECT: address of the zookeeper service
  • KAFKA_ADVERTISED_LISTENERS: listener address advertised to kafka clients
  • KAFKA_BROKER_ID: kafka broker ID, must be unique across the cluster
  • KAFKA_LOG_DIRS: kafka data directory (optional; the volumes below mount it onto the host)
  • KAFKA_LOG_RETENTION_HOURS: how long data files are kept (optional; defaults to 168 hours)
  1. Start the stack (a quick verification is sketched below):
    docker-compose up -d

    If all containers come up without errors, the startup succeeded.
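
As a basic end-to-end check, a test topic can be created against one broker and then described; a replication factor of 3 only succeeds if all three brokers have joined the cluster. This sketch assumes the kafka-topics.sh script is on the PATH inside the wurstmeister/kafka containers:

# create a topic replicated across all 3 brokers
docker exec -it kafka1 kafka-topics.sh --create --topic cluster-check \
  --bootstrap-server 192.168.86.63:9092 --partitions 3 --replication-factor 3

# confirm the partitions are spread over broker IDs 1, 2 and 3
docker exec -it kafka1 kafka-topics.sh --describe --topic cluster-check \
  --bootstrap-server 192.168.86.63:9092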

Deploying a Nacos cluster with docker-compose

Reference: ()

That concludes this article on deploying EMQX, MinIO, Redis, Nacos, and Kafka clusters with docker-compose; hopefully it is a useful reference.