The first advanced-topic installment is, of course, installing an Oracle RAC cluster database on Linux!
Tip: the following is the main body of this article. The walkthrough below is for reference; pay attention to which node each step says to operate on.
Because this is a VMware virtual machine environment, the VM configuration file needs to be edited first.
# Add to the .vmx file of both node 1 and node 2:
disk.EnableUUID = "TRUE"
disk.locking = "FALSE"
scsi1.shared = "TRUE"
diskLib.dataCacheMaxSize = "0"
diskLib.dataCacheMaxReadAheadSize = "0"
diskLib.dataCacheMinReadAheadSize = "0"
diskLib.dataCachePageSize = "4096"
diskLib.maxUnsyncedWrites = "0"
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "VIRTUAL"
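These settings add a shared SCSI controller (scsi1) and disable disk locking and caching so that both VMs can safely open the same disks. Each shared disk also needs its own entries on that controller; a minimal sketch of one disk's entries, where the slot number and vmdk path are examples only (your file names will differ):

scsi1:0.present = "TRUE"
scsi1:0.fileName = "D:\sharedisk\ocr1.vmdk"   # example path; point at your shared vmdk
scsi1:0.mode = "independent-persistent"
scsi1:0.deviceType = "disk"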
For details you can refer to my earlier article:
https://blog.csdn.net/weixin_41607523/article/details/110797695?spm=1001.2014.3001.5501
[root@localhost ~]# mkdir software
[root@localhost ~]# cd software
[root@localhost software]# yum install oracle-database-preinstall-19c-1.0-1.el7.x86_64.rpm
If the rpm cannot be installed this way, see Chapter 1 (environment setup) of https://blog.csdn.net/weixin_41607523/article/details/110482175?spm=1001.2014.3001.5501 and create the users, install the dependency packages, and adjust the kernel parameters and resource limits manually.
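The preinstall package creates the oracle user along with the oinstall and dba groups; a quick way to confirm it did so:

[root@localhost software]# id oracle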
yum install -y binutils compat-libcap1 compat-libstdc++-33 compat-libstdc++-33.i686 \
    gcc gcc-c++ glibc glibc.i686 glibc-devel glibc-devel.i686 ksh \
    libgcc libgcc.i686 libstdc++ libstdc++.i686 libstdc++-devel libstdc++-devel.i686 \
    libaio libaio.i686 libaio-devel libaio-devel.i686 \
    libXext libXext.i686 libXtst libXtst.i686 libX11 libX11.i686 \
    libXau libXau.i686 libxcb libxcb.i686 libXi libXi.i686 \
    make sysstat unixODBC unixODBC-devel readline libtermcap-devel bc \
    compat-libstdc++ elfutils-libelf elfutils-libelf-devel fontconfig-devel \
    libXrender libXrender-devel librdmacm-devel \
    net-tools nfs-utils python python-configshell python-rtslib python-six \
    targetcli smartmontools
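Once the installs finish, it is worth spot-checking that nothing silently failed; rpm -q reports "package ... is not installed" for anything missing (the package list here is just an illustrative subset):

rpm -q binutils gcc glibc ksh libaio libaio-devel make sysstat | grep "not installed"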
Note: on RHEL 7 one standalone package must also be installed separately: rpm -ivh compat-libstdc++-33-3.2.3-72.el7.x86_64.rpm
[root@localhost software]# groupadd -g 54332 asmdba
[root@localhost software]# groupadd -g 54331 asmadmin
[root@localhost software]# groupadd -g 54333 asmoper
[root@localhost software]# useradd -u 54322 -g oinstall -G dba,oper,asmadmin,asmdba,asmoper,racdba grid
[root@localhost software]# usermod -a -G asmdba oracle
# Remember to set passwords for both users.
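For example, setting (or resetting) the passwords interactively:

[root@localhost software]# passwd oracle
[root@localhost software]# passwd grid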
[root@localhost software]# cd /etc/security/limits.d/
[root@localhost limits.d]# pwd
/etc/security/limits.d
[root@localhost limits.d]# vi oracle-database-preinstall-19c.conf
grid soft nofile 1024
grid hard nofile 65536
grid soft nproc 16384
grid hard nproc 16384
grid soft stack 10240
grid hard stack 32768
grid hard memlock 134217728
grid soft memlock 134217728
To verify that the settings above actually take effect, log in again with su - <user> and check.
Check each user's limits with ulimit -a.
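For example, to confirm the grid user's nofile limits match what was configured:

[root@localhost limits.d]# su - grid
[grid@localhost ~]$ ulimit -Sn   # soft nofile, expect 1024
[grid@localhost ~]$ ulimit -Hn   # hard nofile, expect 65536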
[root@localhost limits.d]# vi /etc/hosts
192.168.221.88   rac1
192.168.221.99   rac2
10.10.10.88      rac1-priv
10.10.10.99      rac2-priv
192.168.221.123  rac1-vip
192.168.221.124  rac2-vip
192.168.221.125  rac-scanip
Then set the hostname (rac1 on node 1, rac2 on node 2):
[root@localhost limits.d]# hostname rac1        # or: hostnamectl set-hostname rac1
[root@localhost limits.d]# hostname
rac1
[root@localhost limits.d]# exit
On the Windows host, configure the host-side adapter under Control Panel > Network and Internet > Network Connections > VMware Network Adapter VMnet1.
[root@rac1 network-scripts]# systemctl restart network
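Once the network is back up, it is worth verifying that the two nodes can reach each other over both the public and private networks (the VIP and SCAN addresses will only respond once Grid Infrastructure is running):

[root@rac1 ~]# ping -c 2 rac2
[root@rac1 ~]# ping -c 2 rac2-priv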
[root@rac1 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdb
36000c29473a235055b42ace68faa51fe
[root@rac1 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdc
36000c297a674cbf99c29bce35b598bf6
[root@rac1 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdd
36000c29cc898162b85ed20bd99c3f5fc
[root@rac1 ~]# vi /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sdb", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$name", RESULT=="36000c29473a235055b42ace68faa51fe", SYMLINK+="asm-ocrdisk1", OWNER="grid", GROUP="asmadmin", MODE="0660" KERNEL=="sdc", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$name", RESULT=="36000c297a674cbf99c29bce35b598bf6", SYMLINK+="asm-ocrdisk2", OWNER="grid", GROUP="asmadmin", MODE="0660" KERNEL=="sdd", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$name", RESULT=="36000c29cc898162b85ed20bd99c3f5fc", SYMLINK+="asm-ocrdisk3", OWNER="grid", GROUP="asmadmin", MODE="0660" KERNEL=="sde", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$name", RESULT=="36000c290242af74a235cc4aa010055e9", SYMLINK+="asm-ocrdiskdata", OWNER="grid", GROUP="asmadmin", MODE="0660"
[root@rac1 ~]# udevadm control --reload-rules
[root@rac1 ~]# udevadm trigger
[root@rac1 ~]# ll /dev/asm*
Check the owner, group, and permissions:
ls -l /dev/sd*
Copy the rules file to node 2:
[root@rac2 ~]# scp rac1:/etc/udev/rules.d/99-oracle-asmdevices.rules /etc/udev/rules.d/99-oracle-asmdevices.rules
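After copying, reload udev on node 2 as well, reusing the same commands as on node 1:

[root@rac2 ~]# udevadm control --reload-rules
[root@rac2 ~]# udevadm trigger
[root@rac2 ~]# ll /dev/asm*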
mkdir -p /u01/app/oracle/product/19c/dbhome_1
mkdir -p /u01/grid/product/19c/gridhome_1
mkdir -p /u01/gridbase
chown -R oracle:oinstall /u01/
chown -R grid:oinstall /u01/grid*
Set the environment variables for the grid and oracle users on both nodes 1 and 2. Node 2 differs from node 1 only in ORACLE_SID; everything else is identical:
Node 1, oracle user:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/19c/dbhome_1
export ORACLE_SID=orcl1
export PATH=.:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH:$HOME/bin
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
umask 022

Node 1, grid user:
export ORACLE_SID=+ASM1
export ORACLE_BASE=/u01/gridbase
export ORACLE_HOME=/u01/grid/product/19c/gridhome_1
export PATH=.:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH:$HOME/bin
umask 022

Node 2, oracle user:
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/19c/dbhome_1
export ORACLE_SID=orcl2
export PATH=.:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH:$HOME/bin
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
umask 022

Node 2, grid user:
export ORACLE_SID=+ASM2
export ORACLE_BASE=/u01/gridbase
export ORACLE_HOME=/u01/grid/product/19c/gridhome_1
export PATH=.:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH:$HOME/bin
umask 022

[grid@rac1 ~]$ source .bash_profile
[grid@rac2 ~]$ source .bash_profile
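A quick sanity check that the profile was loaded:

[grid@rac1 ~]$ echo $ORACLE_SID $ORACLE_HOME   # expect: +ASM1 /u01/grid/product/19c/gridhome_1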
[root@rac1 ~]# vi .bash_profile
# Append the grid user's ORACLE_HOME bin directory to root's PATH:
PATH=$PATH:/u01/grid/product/19c/gridhome_1/bin:$HOME/bin
Node 1, oracle user:
[oracle@rac1 dbhome_1]$ pwd
/u01/app/oracle/product/19c/dbhome_1
[oracle@rac1 dbhome_1]$ ls
LINUX.X64_193000_db_home.zip
[oracle@rac1 dbhome_1]$ unzip LINUX.X64_193000_db_home.zip

Node 1, grid user:
[grid@rac1 dbhome_1]$ cd /u01/grid/product/19c/gridhome_1/
[grid@rac1 gridhome_1]$ ls
LINUX.X64_193000_grid_home.zip
[grid@rac1 gridhome_1]$ unzip LINUX.X64_193000_grid_home.zip
Both nodes need the cvuqdisk-1.0.10-1.rpm package installed. It is not on the Linux installation media; look for it in the unzipped grid installation files, under the cv/rpm directory.
[root@rac1 ~]# cd /u01/grid/product/19c/gridhome_1/cv/rpm/
[root@rac1 rpm]# ls
cvuqdisk-1.0.10-1.rpm
[root@rac1 rpm]# yum install cvuqdisk-1.0.10-1.rpm
# Copy the rpm to node 2:
[root@rac1 rpm]# scp cvuqdisk-1.0.10-1.rpm 10.10.10.99:/u01/
[root@rac2 u01]# yum install cvuqdisk-1.0.10-1.rpm
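With cvuqdisk in place, you can optionally run Oracle's Cluster Verification Utility, which ships in the unzipped grid home, to precheck both nodes before launching the installer (not part of the original steps, but a common sanity check):

[grid@rac1 gridhome_1]$ ./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -verbose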
[root@rac2 u01]# systemctl disable ntpd.service
[root@rac2 u01]# systemctl stop ntpd.service
[root@rac2 u01]# mv /etc/ntp.conf /etc/ntp.conf.orig
[root@rac2 u01]# systemctl status ntpd
● ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
[root@rac2 u01]# timedatectl list-timezones | grep Shanghai
Asia/Shanghai
[root@rac2 u01]# timedatectl set-timezone Asia/Shanghai
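CentOS/RHEL 7 uses chronyd by default; if it is present, disable it the same way so Oracle's Cluster Time Synchronization Service (CTSS) can take over (an addition to the original steps):

[root@rac2 u01]# systemctl disable chronyd.service
[root@rac2 u01]# systemctl stop chronyd.service
[root@rac2 u01]# mv /etc/chrony.conf /etc/chrony.conf.orig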
[root@rac2 u01]# systemctl disable avahi-daemon.socket
Removed symlink /etc/systemd/system/sockets.target.wants/avahi-daemon.socket.
[root@rac2 u01]# systemctl disable avahi-daemon.service
Removed symlink /etc/systemd/system/multi-user.target.wants/avahi-daemon.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.Avahi.service.
# If avahi-daemon is still running, find its PID and kill it:
[root@rac2 u01]# ps -ef | grep avahi-daemon
[root@rac2 u01]# kill -9 <avahi-daemon pid>
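Equivalently, as a single command (assuming pkill is available):

[root@rac2 u01]# pkill -9 avahi-daemon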
[root@rac2 u01]# vi /etc/default/grub
# Append transparent_hugepage=never to the GRUB_CMDLINE_LINUX line:
GRUB_CMDLINE_LINUX="rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet transparent_hugepage=never"
[root@rac2 u01]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-3.10.0-957.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-957.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-c1e3e15a96c847918efb7bf5b02166d3
Found initrd image: /boot/initramfs-0-rescue-c1e3e15a96c847918efb7bf5b02166d3.img
Done
To make the change take effect without rebooting:
[root@rac2 u01]# echo never > /sys/kernel/mm/transparent_hugepage/enabled
Check the status:
[root@rac2 u01]# cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
[root@rac2 u01]# grep AnonHugePages /proc/meminfo
AnonHugePages:         0 kB
A value of 0 kB means transparent huge pages have been disabled successfully.
Stop and disable the firewall on both nodes:
[root@rac1 ~]# systemctl stop firewalld.service      # stop firewalld
[root@rac1 ~]# systemctl disable firewalld.service   # keep firewalld from starting at boot
[root@rac2 ~]# systemctl stop firewalld.service
[root@rac2 ~]# systemctl disable firewalld.service
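To confirm:

[root@rac1 ~]# firewall-cmd --state
# should report: not running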
[root@rac1 ~]# xhost +
access control disabled, clients can connect from any host
[grid@rac1 gridhome_1]$ DISPLAY=10.10.10.1:0.0
[grid@rac1 gridhome_1]$ export DISPLAY
[grid@rac1 gridhome_1]$ ./gridSetup.sh
Launching the Oracle Grid Infrastructure installation wizard...
Fix: correct the owner information in the rules file: vi /etc/udev/rules.d/99-oracle-asmdevices.rules
[root@rac1 gridbase]# ls -l /dev/sd*
[root@rac1 u01]# cd /u01/grid/product/19c/gridhome_1/bin/
# Check the cluster status:
[root@rac1 bin]# ./crsctl stat res -t
Note: the following scripts must be run on rac1 first; once they complete there, run them on rac2.
Script 1:
[root@rac1 gridhome_1]# cd /u01/gridbase/oraInventory/
[root@rac1 oraInventory]# ls
ContentsXML  logs  oraInst.loc  orainstRoot.sh
[root@rac1 oraInventory]# sh orainstRoot.sh
Script 2:
[root@rac1 app]# cd /u01/grid/product/19c/gridhome_1
[root@rac1 gridhome_1]# sh root.sh
[grid@rac1 ~]$ asmca
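After creating the disk groups in asmca, you can verify them from the shell with asmcmd, which ships in the grid home:

[grid@rac1 ~]$ asmcmd lsdg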
Export DISPLAY before launching runInstaller:
[oracle@rac1 ~]$ cd $ORACLE_HOME
[oracle@rac1 dbhome_1]$ DISPLAY=10.10.10.1:0.0
[oracle@rac1 dbhome_1]$ export DISPLAY
[oracle@rac1 dbhome_1]$ ./runInstaller
Node 1:
[root@rac1 ~]# cd /u01/app/oracle/product/19c/dbhome_1/
[root@rac1 dbhome_1]# sh root.sh
Node 2:
[root@rac2 ~]# cd /u01/app/oracle/product/19c/dbhome_1/
[root@rac2 dbhome_1]# sh root.sh
[oracle@rac1 dbhome_1]$ dbca
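Once dbca completes, a quick check that both instances are running (using the database name orcl from the steps below):

[oracle@rac1 ~]$ srvctl status database -d orcl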
Start ASM on both node 1 and node 2 as needed:
[root@rac1 bin]# cd /u01/grid/product/19c/gridhome_1/bin
[root@rac1 bin]# ./srvctl start asm
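And confirm its status:

[root@rac1 bin]# ./srvctl status asm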
Check the cluster status: crsctl stat res -t

1. Stop a node's cluster services (must be done as root):
[root@rac1 oracle]# cd /u01/grid/product/19c/gridhome_1/bin
[root@rac1 bin]# ./crsctl stop cluster        # stop cluster services on the local node
[root@rac1 bin]# ./crsctl stop cluster -all   # stop cluster services on all nodes
You can also specify which nodes to stop:
[root@rac1 bin]# ./crsctl stop cluster -n rac1 rac2
2. Check the local node's cluster status:
[root@rac1 bin]# ./crsctl check crs
3. Start all nodes:
[root@rac1 bin]# ./crsctl start cluster -n rac1 rac2
[root@rac1 bin]# ./crsctl start cluster -all
[root@rac2 ~]# ./crsctl check cluster
Stop the database (run as the oracle user on either node):
srvctl stop database -d orcl -o immediate
Start the database:
srvctl start database -d orcl