Host | Operating system | Role |
---|---|---|
192.168.11.82 | Ubuntu 22.04 | Master (source) server |
192.168.11.28 | Oracle Linux Server 8.7 | Slave (target) server |
Versions used:

- MySQL: 8.0.32
- Canal: 1.1.7 (I deployed canal on 192.168.11.82; you can deploy it on any other server as well)
- Java: 1.8.0_362
Enable the binlog on the master. On Ubuntu the configuration file is:

```
cd /etc/mysql/mysql.conf.d
vi mysqld.cnf
```

On Oracle Linux it is:

```
vi /etc/my.cnf
```

Add the following under the `[mysqld]` section:

```
# server ID; any value that does not clash with the other nodes
server_id=1
log_bin=binlog
binlog_format=ROW
```
Canal itself is not introduced here; see the official documentation:
https://github.com/alibaba/canal/wiki/简介
Start MySQL:

```
systemctl start mysql            # Ubuntu
systemctl start mysqld.service   # Oracle Linux
```
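Once MySQL is running, it is worth confirming that the binlog settings took effect before going any further. A quick check (the expected values in the comments assume the configuration above):

```sql
-- run on the master
SHOW VARIABLES LIKE 'log_bin';        -- expect: ON
SHOW VARIABLES LIKE 'binlog_format';  -- expect: ROW
SHOW VARIABLES LIKE 'server_id';      -- expect: 1
```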
```
mysql> CREATE USER canal IDENTIFIED BY 'canal';
mysql> GRANT SELECT, SHOW VIEW, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';
mysql> FLUSH PRIVILEGES;
```
```
cd /usr/local
mkdir canal
cd canal
mkdir canal-package canal-adapter canal-deployer
```
https://github.com/alibaba/canal/releases/tag/canal-1.1.7-alpha-2
Only two files need to be downloaded: canal.adapter-1.1.7-SNAPSHOT.tar.gz and canal.deployer-1.1.7-SNAPSHOT.tar.gz (marked with red boxes in the original screenshot, not reproduced here). Copy them into the canal-package folder.
Extract the archives:

```
tar -zxvf canal.adapter-1.1.7-SNAPSHOT.tar.gz -C /usr/local/canal/canal-adapter
tar -zxvf canal.deployer-1.1.7-SNAPSHOT.tar.gz -C /usr/local/canal/canal-deployer
```
Since this setup synchronizes data between MySQL databases, only instance.properties needs to be modified.
```
cd /usr/local/canal/canal-deployer/conf/example
vi instance.properties
```

Modify the following entries (the original screenshot marked three places; the corresponding keys are):

- Master address (`canal.instance.master.address`): the IP of the master server, here 192.168.11.82:3306.
- Database credentials (`canal.instance.dbUsername` / `canal.instance.dbPassword`): the canal account created above.
- Table filter (`canal.instance.filter.regex`): by default every table in every database is synchronized. To restrict it, list comma-separated patterns. For example, to sync the user table in database test1, all tables in database test2, and all tables in all databases:

```
canal.instance.filter.regex=test1.user,test2\\..*,.*\\..*
```
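To make the filter rules concrete, here is an illustrative sketch of how those patterns select tables. Canal matches each comma-separated Java regex against the full `database.table` name; this Python mimic is only for demonstration, not canal's actual implementation:

```python
import re

def matches(filter_expr: str, full_table_name: str) -> bool:
    """True if any comma-separated pattern fully matches database.table."""
    return any(re.fullmatch(p, full_table_name) is not None
               for p in filter_expr.split(","))

# test1.user  -> exactly the user table in test1
# test2\..*   -> every table in database test2
# .*\..*      -> every table in every database
print(matches(r"test1.user", "test1.user"))                # True
print(matches(r"test2\..*", "test2.orders"))               # True
print(matches(r"test1.user,test2\..*", "test3.anything"))  # False
```

Note that in instance.properties each backslash is written doubled (`test2\\..*`) because of properties-file escaping; the effective regex is `test2\..*`.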
```
cd /usr/local/canal/canal-deployer/bin
./startup.sh
```
Check the log to confirm that startup succeeded:
```
cd /usr/local/canal/canal-deployer/logs/example
cat example.log
```
A few errors I ran into, for reference:
If no example folder appears under logs after canal-deployer starts, check the following:
1. Check whether a .pid file exists under /usr/local/canal/canal-deployer/bin (a leftover one from a previous run blocks startup; run ./stop.sh and remove it before retrying).
2. Check the canal_stdout.log file under the canal folder inside logs:

```
cat /usr/local/canal/canal-deployer/logs/canal/canal_stdout.log
```
If it contains errors like the following (screenshot not reproduced here; in my case, errors about unsupported JVM startup parameters):
Solution (I strongly recommend installing only JDK 8 or JDK 11 on the system; do not use jenv to manage multiple JDK versions):
```
cd /usr/local/canal/canal-deployer/bin
./stop.sh
vi startup.sh   # delete the JVM parameter named in the error (it may appear several times; delete whichever parameter is reported)
./startup.sh
```
Restart:
```
cd /usr/local/canal/canal-deployer/bin
./startup.sh
```
until the following appears:
```
# open the log file
cat /usr/local/canal/canal-deployer/logs/example/example.log
```

The following line indicates that canal-deployer started successfully:

```
[destination = example , address = /192.168.11.82:3306 , EventParser] WARN c.a.o.c.p.inbound.mysql.rds.RdsBinlogEventParserProxy - ---> find start position successfully, EntryPosition[included=false,journalName=binlog.000040,position=65224673,serverId=1,gtid=,timestamp=1682062760000] cost : 1331ms , the next step is binlog dump
```
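If you want to monitor the reported start position programmatically, the EntryPosition fields in that line can be pulled out with a short script. This is just a convenience sketch that assumes the log format shown above, not part of canal:

```python
import re

# Abbreviated copy of the success line from example.log above.
line = ("find start position successfully, EntryPosition[included=false,"
        "journalName=binlog.000040,position=65224673,serverId=1,gtid=,"
        "timestamp=1682062760000] cost : 1331ms")

# Grab the key=value list inside EntryPosition[...] and turn it into a dict.
m = re.search(r"EntryPosition\[([^\]]*)\]", line)
fields = dict(kv.split("=", 1) for kv in m.group(1).split(","))

print(fields["journalName"], fields["position"])  # binlog.000040 65224673
```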
```
cd /usr/local/canal/canal-adapter/conf
vi application.yml
```
```yaml
server:
  port: 8081
spring:
  jackson:
    date-format: yyyy-MM-dd HH:mm:ss
    time-zone: GMT+8
    default-property-inclusion: non_null

canal.conf:
  mode: tcp #tcp kafka rocketMQ rabbitMQ
  flatMessage: true
  zookeeperHosts:
  syncBatchSize: 1000
  retries: -1
  timeout:
  accessKey:
  secretKey:
  consumerProperties:
    # canal tcp consumer
    # edit point 1: IP of the host running canal-deployer
    canal.tcp.server.host: 127.0.0.1:11111
    canal.tcp.zookeeper.hosts:
    canal.tcp.batch.size: 500
    canal.tcp.username:
    canal.tcp.password:
    # kafka consumer
    kafka.bootstrap.servers: 127.0.0.1:9092
    kafka.enable.auto.commit: false
    kafka.auto.commit.interval.ms: 1000
    kafka.auto.offset.reset: latest
    kafka.request.timeout.ms: 40000
    kafka.session.timeout.ms: 30000
    kafka.isolation.level: read_committed
    kafka.max.poll.records: 1000
    # rocketMQ consumer
    rocketmq.namespace:
    rocketmq.namesrv.addr: 127.0.0.1:9876
    rocketmq.batch.size: 1000
    rocketmq.enable.message.trace: false
    rocketmq.customized.trace.topic:
    rocketmq.access.channel:
    rocketmq.subscribe.filter:
    # rabbitMQ consumer
    rabbitmq.host:
    rabbitmq.virtual.host:
    rabbitmq.username:
    rabbitmq.password:
    rabbitmq.resource.ownerId:

  # edit point 2: add the source database connection; this setup syncs all tables in the database
  srcDataSources:
    defaultDS:
      url: jdbc:mysql://192.168.11.82:3306/mynet?useUnicode=true&characterEncoding=utf8&autoReconnect=true&useSSL=false
      username: ymliu
      password: ymliu2023
  canalAdapters:
  - instance: example # canal instance Name or mq topic name
    groups:
    - groupId: g1
      outerAdapters:
      - name: logger
      - name: rdb
        key: mysql1
        properties:
          jdbc.driverClassName: com.mysql.jdbc.Driver
          jdbc.url: jdbc:mysql://192.168.11.28:3306/mynet?useUnicode=true&characterEncoding=utf8&autoReconnect=true&useSSL=false
          jdbc.username: ymliu
          jdbc.password: ymliu2023
          druid.stat.enable: false
          druid.stat.slowSqlMillis: 1000
      # - name: rdb
      #   key: oracle1
      #   properties:
      #     jdbc.driverClassName: oracle.jdbc.OracleDriver
      #     jdbc.url: jdbc:oracle:thin:@localhost:49161:XE
      #     jdbc.username: mytest
      #     jdbc.password: m121212
      # - name: rdb
      #   key: postgres1
      #   properties:
      #     jdbc.driverClassName: org.postgresql.Driver
      #     jdbc.url: jdbc:postgresql://localhost:5432/postgres
      #     jdbc.username: postgres
      #     jdbc.password: 121212
      #     threads: 1
      #     commitSize: 3000
      # - name: hbase
      #   properties:
      #     hbase.zookeeper.quorum: 127.0.0.1
      #     hbase.zookeeper.property.clientPort: 2181
      #     zookeeper.znode.parent: /hbase
      # - name: es
      #   hosts: 127.0.0.1:9300 # 127.0.0.1:9200 for rest mode
      #   properties:
      #     mode: transport # or rest
      #     # security.auth: test:123456 # only used for rest mode
      #     cluster.name: elasticsearch
      # - name: kudu
      #   key: kudu
      #   properties:
      #     kudu.master.address: 127.0.0.1 # ',' split multi address
      # - name: phoenix
      #   key: phoenix
      #   properties:
      #     jdbc.driverClassName: org.apache.phoenix.jdbc.PhoenixDriver
      #     jdbc.url: jdbc:phoenix:127.0.0.1:2181:/hbase/db
      #     jdbc.username:
      #     jdbc.password:
```
```
cd /usr/local/canal/canal-adapter/conf/rdb
vi mytest_user.yml
```
```yaml
dataSourceKey: defaultDS   # key of the source datasource, matching srcDataSources above
destination: example       # canal instance name or MQ topic
groupId: g1                # groupId in MQ mode; only data for this groupId is synced
outerAdapterKey: mysql1    # adapter key, matching the key under outerAdapters above
concurrent: true           # sync in parallel, hashed by primary key; parallel tables must have an immutable PK that is not a foreign key of another synced table!
dbMapping:
  database: test           # source database/schema
  table: user              # source table name
  targetTable: test.user   # target database.table
  targetPk:                # primary-key mapping
    id: id                 # for a composite key, map one entry per line
  mapAll: true             # map the whole table; requires identical column names on both sides (if targetColumns is also set, targetColumns takes precedence)
  #targetColumns:          # column mapping, format target_column: source_column; the source can be omitted if the names match
  #  id:
  #  name:
  #  role_id:
  #  c_time:
  #  test1:
```
To mirror an entire source database instead of mapping individual tables, use mirrorDb:

```yaml
dataSourceKey: defaultDS
destination: example
groupId: g1
outerAdapterKey: mysql1
concurrent: true
dbMapping:
  mirrorDb: true
  database: test   # change this to your database name
```
```
cd /usr/local/canal/canal-adapter/bin
./startup.sh
```
If errors occur, troubleshoot the same way as for canal-deployer.
Check the log:
```
cd /usr/local/canal/canal-adapter/logs/adapter
cat adapter.log
```
1. canal-deployer only starts reading the binlog after canal-adapter has started and connected to it successfully; only then will a meta.dat file appear under /usr/local/canal/canal-deployer/conf/example.
2. After canal-adapter starts, a connection-failure message for 127.0.0.1:3306/canal_manage will appear. It is caused by not installing the web admin UI; it can be ignored and does not affect data synchronization.
3. Everything above comes from my own hands-on testing.
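Once both components are running, a quick way to confirm end-to-end synchronization is to write on the master and read on the slave. The table and row below are hypothetical placeholders; substitute a real table from the synced database (mynet in this walkthrough):

```sql
-- On the master (192.168.11.82): insert a test row
-- (user here is a placeholder; use a real table in the synced database)
INSERT INTO user (id, name) VALUES (1001, 'canal-sync-test');

-- On the slave (192.168.11.28): the row should appear within a second or two
SELECT * FROM user WHERE id = 1001;
```

If the row does not show up, re-check example.log on the deployer and adapter.log on the adapter for errors.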