This series of posts installs a 19c (19.3) RAC database on two Enterprise Linux 8.1 servers. There are four parts in this series.
- Preparing Two Database Servers (1 of 4)
- Installing Grid Infrastructure (2 of 4)
- Installing Database Software (3 of 4)
- Creating a RAC Database (4 of 4)
This post is the first part.
Assuming that you have already prepared shared storage for the RAC database, we start the series by preparing the two database servers. To prepare both nodes properly, we go through the following checklist:
- Network
- Security
- Time
- Packages
- Users, Groups and Directories
- Storage
- Software
- Verification
- More Fixes
Network
Let's see the network design of our cluster database by listing /etc/hosts.
[root@primary01 ~]# vi /etc/hosts
# Public
192.168.10.11 primary01 primary01.example.com
192.168.10.12 primary02 primary02.example.com
# Private
192.168.24.11 primary01-priv primary01-priv.example.com
192.168.24.12 primary02-priv primary02-priv.example.com
# VIP
192.168.10.111 primary01-vip primary01-vip.example.com
192.168.10.112 primary02-vip primary02-vip.example.com
# SCAN
# 192.168.10.81 primary-cluster-scan primary-cluster-scan.example.com
# 192.168.10.82 primary-cluster-scan primary-cluster-scan.example.com
# 192.168.10.83 primary-cluster-scan primary-cluster-scan.example.com
# NAS
192.168.10.101 nas nas.example.com
# DNS
192.168.10.199 dns dns.example.com
Please note that every entry in /etc/hosts above follows the same format: IP address, short hostname, then fully qualified hostname.
The network design is summarized below:
- Two nodes for RAC servers: primary01 and primary02.
- One NAS for shared storage.
- One DNS server for resolving hostnames, especially the SCAN name.
- Two network cards per node: one for the public subnet, the other for the private subnet.
- Public subnet (192.168.10.0/24): only for public connections. Later, all virtual IPs, including the SCAN and VIPs, will be dynamically bound to this NIC once CRS is running.
- Private subnet (192.168.24.0/24): for both the private interconnect and ASM traffic. This NIC is mainly for node interconnection, not for outside access.
Please note that only local DNS server(s) should be listed in the NIC configuration; otherwise, the Grid Infrastructure OUI will complain about it.
Note that the public network requires broadcast support, and the private network requires multicast support.
The same /etc/hosts configuration must also be present on primary02 according to our design.
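Before moving on, it is worth a quick sanity check that name resolution and basic connectivity work from each node. A minimal check, assuming bind-utils is installed and the SCAN name primary-cluster-scan is already defined on the DNS server at 192.168.10.199, could look like this:
[root@primary01 ~]# ping -c 2 primary02
[root@primary01 ~]# ping -c 2 primary02-priv
[root@primary01 ~]# nslookup primary-cluster-scan.example.com 192.168.10.199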
Security
We stop the firewall and disable SELinux on both nodes to make the RAC installation go more smoothly.
[root@primary01 ~]# systemctl stop firewalld
[root@primary01 ~]# systemctl disable firewalld
[root@primary01 ~]# vi /etc/selinux/config
...
SELINUX=disabled
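Note that SELINUX=disabled only takes effect after a reboot. If you would rather not reboot right away, switching SELinux to permissive mode at runtime is usually enough for the installation:
[root@primary01 ~]# setenforce 0
[root@primary01 ~]# getenforce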
If you cannot or would rather not disable the firewall, you can instead open port 1521 on Linux for your listener.
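For example, with firewalld still running, opening the listener port could be as simple as the following sketch; keep in mind that a full RAC stack uses more ports than just 1521:
[root@primary01 ~]# firewall-cmd --permanent --add-port=1521/tcp
[root@primary01 ~]# firewall-cmd --reload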
Time
Time synchronization is very important for a RAC database; if the time on one node deviates too much from the other node's, CRS may decide to evict it. Please make sure NTP is working well on both nodes. Here we use chronyd, the default NTP service, to play that role.
[root@primary01 ~]# systemctl status chronyd
● chronyd.service - NTP client/server
Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2020-09-28 12:58:51 CST; 3min 22s ago
Docs: man:chronyd(8)
man:chrony.conf(5)
Process: 1038 ExecStartPost=/usr/libexec/chrony-helper update-daemon (code=exited, status=0/SUCCESS)
Process: 1020 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 1028 (chronyd)
Tasks: 1 (limit: 23830)
Memory: 3.3M
CGroup: /system.slice/chronyd.service
└─1028 /usr/sbin/chronyd
Sep 28 12:58:51 primary01.example.com chronyd[1028]: chronyd version 3.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +>
Sep 28 12:58:51 primary01.example.com chronyd[1028]: Frequency -112.344 +/- 1.716 ppm read from /var/lib/chrony/drift
Sep 28 12:58:51 primary01.example.com chronyd[1028]: Using right/UTC timezone to obtain leap second data
Sep 28 12:58:51 primary01.example.com systemd[1]: Started NTP client/server.
Sep 28 12:59:02 primary01.example.com chronyd[1028]: Selected source 111.235.248.121
Sep 28 12:59:02 primary01.example.com chronyd[1028]: System clock TAI offset set to 37 seconds
Sep 28 12:59:03 primary01.example.com chronyd[1028]: Source 218.161.118.185 replaced with 2406:2000:fc:437::2000
Sep 28 12:59:03 primary01.example.com chronyd[1028]: Received KoD RATE from 118.163.74.161
Sep 28 12:59:04 primary01.example.com chronyd[1028]: Selected source 49.213.184.242
Sep 28 13:00:07 primary01.example.com chronyd[1028]: Selected source 118.163.170.6
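To double-check that chronyd is really synchronizing against its sources, you can query it directly with the standard chrony client commands:
[root@primary01 ~]# chronyc tracking
[root@primary01 ~]# chronyc sources -v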
Packages
Since the installed edition of Linux is the minimal one, we need to install all the packages (RPMs) required by the Oracle 19c products later, on both nodes.
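We don't list every single RPM here. On Oracle Linux 8, the easiest way is usually the pre-installation package, which pulls in the required dependencies and also creates the oracle user, the base groups and the recommended kernel settings; this sketch assumes the ol8_appstream repository is reachable:
[root@primary01 ~]# dnf -y install oracle-database-preinstall-19c
[root@primary01 ~]# dnf -y install unzip bind-utils
The second command just adds a couple of tools (unzip, nslookup) used elsewhere in this post.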
Users, Groups and Directories
Although the Oracle pre-installation package has already created some of the necessary users and groups for us, we run the creation commands again to make sure everything is in place.
Create Necessary Users and Groups
On both nodes.
[root@primary01 ~]# groupadd -g 54321 oinstall
groupadd: group 'oinstall' already exists
[root@primary01 ~]# groupadd -g 54322 dba
groupadd: group 'dba' already exists
[root@primary01 ~]# groupadd -g 54323 oper
groupadd: group 'oper' already exists
[root@primary01 ~]# groupadd -g 54324 backupdba
groupadd: group 'backupdba' already exists
[root@primary01 ~]# groupadd -g 54325 dgdba
groupadd: group 'dgdba' already exists
[root@primary01 ~]# groupadd -g 54326 kmdba
groupadd: group 'kmdba' already exists
[root@primary01 ~]# groupadd -g 54327 asmdba
[root@primary01 ~]# groupadd -g 54328 asmoper
[root@primary01 ~]# groupadd -g 54329 asmadmin
[root@primary01 ~]# groupadd -g 54330 racdba
groupadd: group 'racdba' already exists
[root@primary01 ~]# useradd -u 54322 -g oinstall -G asmoper,asmadmin,asmdba,racdba grid
[root@primary01 ~]# id grid
uid=54322(grid) gid=54321(oinstall) groups=54321(oinstall),54330(racdba),54327(asmdba),54328(asmoper),54329(asmadmin)
[root@primary01 ~]# usermod -a -G asmdba oracle
[root@primary01 ~]# id oracle
uid=54321(oracle) gid=54321(oinstall) groups=54321(oinstall),54322(dba),54323(oper),54324(backupdba),54325(dgdba),54326(kmdba),54330(racdba),54327(asmdba)
As you can see, the Oracle pre-installation package has already created some of the groups for us.
User Credentials
The grid and oracle users still have no passwords, so we should set passwords for them.
On both nodes.
[root@primary01 ~]# passwd grid
Changing password for user grid.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[root@primary01 ~]# passwd oracle
Changing password for user oracle.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
Directories
On both nodes, we create the necessary directories for grid and oracle.
[root@primary01 ~]# mkdir -p /u01/app/19.0.0/grid
[root@primary01 ~]# mkdir -p /u01/app/grid
[root@primary01 ~]# mkdir -p /u01/app/oracle/product/19.0.0/db_1
[root@primary01 ~]# chown -R grid:oinstall /u01
[root@primary01 ~]# chown -R oracle:oinstall /u01/app/oracle
[root@primary01 ~]# chmod -R 775 /u01/
Set Profiles for grid and oracle
First, we set profiles for user grid on both nodes.
[root@primary01 ~]# su - grid
[grid@primary01 ~]$ vi .bash_profile
...
# User specific environment and startup programs
umask 022
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/grid
ORACLE_HOME=/u01/app/19.0.0/grid
LD_LIBRARY_PATH=$ORACLE_HOME/lib
TMP=/tmp
TMPDIR=/tmp
PATH=$PATH:$HOME/.local/bin:$HOME/bin:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch
export PATH ORACLE_SID ORACLE_BASE ORACLE_HOME LD_LIBRARY_PATH TMP TMPDIR
export CV_ASSUME_DISTID=OEL8.1
[grid@primary01 ~]$ exit
logout
Now we turn to user oracle on both nodes.
[root@primary01 ~]# su - oracle
[oracle@primary01 ~]$ vi .bash_profile
...
# User specific environment and startup programs
umask 022
ORACLE_SID=ORCLCDB1
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=$ORACLE_BASE/product/19.0.0/db_1
LD_LIBRARY_PATH=$ORACLE_HOME/lib
TMP=/tmp
TMPDIR=/tmp
PATH=$PATH:$HOME/.local/bin:$HOME/bin:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch
export PATH ORACLE_SID ORACLE_BASE ORACLE_HOME LD_LIBRARY_PATH TMP TMPDIR
export CV_ASSUME_DISTID=OEL8.1
[oracle@primary01 ~]$ exit
logout
Please change the trailing number in ORACLE_SID on the second node, i.e. +ASM2 for grid and ORCLCDB2 for oracle.
Set Resource Limitations
On both nodes.
[root@primary01 ~]# vi /etc/security/limits.conf
...
grid soft nofile 1024
grid hard nofile 65536
grid soft nproc 2047
grid hard nproc 16384
grid soft stack 10240
grid hard stack 32768
grid soft memlock unlimited
grid hard memlock unlimited
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft stack 10240
oracle hard stack 32768
oracle soft memlock unlimited
oracle hard memlock unlimited
We also reload and verify the kernel parameters, which are assumed to have been added to /etc/sysctl.conf (by hand or by the Oracle pre-installation setup):
[root@primary01 ~]# sysctl -p
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
kernel.panic_on_oops = 1
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.conf.all.rp_filter = 2
net.ipv4.conf.default.rp_filter = 2
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
Storage
In our design, we use a NAS that provides an iSCSI sharing service. Our database servers act as iSCSI clients, so we need to install the iSCSI initiator package. For Solaris database servers, you may check: How to Configure iSCSI Disk on Solaris.
As I mentioned earlier, I assume that you have already prepared the shared storage for the database servers to connect to. You may find a tutorial here: How to Build 12c RAC (1/6) - Building a NAS for Shared Storage of RAC.
First of all, install iSCSI initiator on both nodes.
[root@primary01 ~]# yum info iscsi-initiator-utils lsscsi
Last metadata expiration check: 0:00:25 ago on Mon 28 Sep 2020 01:36:42 PM CST.
Installed Packages
Name : lsscsi
Version : 0.30
Release : 1.el8
Architecture : x86_64
Size : 124 k
Source : lsscsi-0.30-1.el8.src.rpm
Repository : @System
From repo : anaconda
Summary : List SCSI devices (or hosts) and associated information
URL : http://sg.danny.cz/scsi/lsscsi.html
License : GPLv2+
Description : Uses information provided by the sysfs pseudo file system in
: Linux kernel 2.6 series to list SCSI devices or all SCSI hosts.
: Includes a "classic" option to mimic the output of "cat
: /proc/scsi/scsi" that has been widely used prior to the lk 2.6
: series.
:
: Author:
: --------
: Doug Gilbert <dgilbert(at)interlog(dot)com>
Available Packages
Name : iscsi-initiator-utils
Version : 6.2.0.878
Release : 4.gitd791ce0.0.1.el8
Architecture : i686
Size : 393 k
Source : iscsi-initiator-utils-6.2.0.878-4.gitd791ce0.0.1.el8.src.rpm
Repository : ol8_baseos_latest
Summary : iSCSI daemon and utility programs
URL : http://www.open-iscsi.org
License : GPLv2+
Description : The iscsi package provides the server daemon for the iSCSI
: protocol, as well as the utility programs used to manage it.
: iSCSI is a protocol for distributed disk access using SCSI
: commands sent over Internet Protocol networks.
Name : iscsi-initiator-utils
Version : 6.2.0.878
Release : 4.gitd791ce0.0.1.el8
Architecture : src
Size : 683 k
Source : None
Repository : ol8_baseos_latest
Summary : iSCSI daemon and utility programs
URL : http://www.open-iscsi.org
License : GPLv2+
Description : The iscsi package provides the server daemon for the iSCSI
: protocol, as well as the utility programs used to manage it.
: iSCSI is a protocol for distributed disk access using SCSI
: commands sent over Internet Protocol networks.
Name : iscsi-initiator-utils
Version : 6.2.0.878
Release : 4.gitd791ce0.0.1.el8
Architecture : x86_64
Size : 378 k
Source : iscsi-initiator-utils-6.2.0.878-4.gitd791ce0.0.1.el8.src.rpm
Repository : ol8_baseos_latest
Summary : iSCSI daemon and utility programs
URL : http://www.open-iscsi.org
License : GPLv2+
Description : The iscsi package provides the server daemon for the iSCSI
: protocol, as well as the utility programs used to manage it.
: iSCSI is a protocol for distributed disk access using SCSI
: commands sent over Internet Protocol networks.
Name : lsscsi
Version : 0.30
Release : 1.el8
Architecture : src
Size : 200 k
Source : None
Repository : ol8_baseos_latest
Summary : List SCSI devices (or hosts) and associated information
URL : http://sg.danny.cz/scsi/lsscsi.html
License : GPLv2+
Description : Uses information provided by the sysfs pseudo file system in
: Linux kernel 2.6 series to list SCSI devices or all SCSI hosts.
: Includes a "classic" option to mimic the output of "cat
: /proc/scsi/scsi" that has been widely used prior to the lk 2.6
: series.
:
: Author:
: --------
: Doug Gilbert <dgilbert(at)interlog(dot)com>
[root@primary01 ~]# yum -y install iscsi-initiator-utils lsscsi
Last metadata expiration check: 0:00:36 ago on Mon 28 Sep 2020 01:36:42 PM CST.
Package lsscsi-0.30-1.el8.x86_64 is already installed.
Dependencies resolved.
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
iscsi-initiator-utils
x86_64 6.2.0.878-4.gitd791ce0.0.1.el8 ol8_baseos_latest 378 k
Installing dependencies:
isns-utils-libs x86_64 0.99-1.el8 ol8_baseos_latest 105 k
iscsi-initiator-utils-iscsiuio
x86_64 6.2.0.878-4.gitd791ce0.0.1.el8 ol8_baseos_latest 101 k
Transaction Summary
================================================================================
Install 3 Packages
Total download size: 584 k
Installed size: 2.5 M
Downloading Packages:
(1/3): isns-utils-libs-0.99-1.el8.x86_64.rpm 316 kB/s | 105 kB 00:00
(2/3): iscsi-initiator-utils-iscsiuio-6.2.0.878 282 kB/s | 101 kB 00:00
(3/3): iscsi-initiator-utils-6.2.0.878-4.gitd79 714 kB/s | 378 kB 00:00
--------------------------------------------------------------------------------
Total 1.1 MB/s | 584 kB 00:00
warning: /var/cache/dnf/ol8_baseos_latest-e4c6155830ad002c/packages/isns-utils-libs-0.99-1.el8.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID ad986da3: NOKEY
Oracle Linux 8 BaseOS Latest (x86_64) 84 kB/s | 3.1 kB 00:00
Importing GPG key 0xAD986DA3:
Userid : "Oracle OSS group (Open Source Software group) <[email protected]>"
Fingerprint: 76FD 3DB1 3AB6 7410 B89D B10E 8256 2EA9 AD98 6DA3
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-oracle
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Installing : isns-utils-libs-0.99-1.el8.x86_64 1/3
Running scriptlet: isns-utils-libs-0.99-1.el8.x86_64 1/3
Installing : iscsi-initiator-utils-iscsiuio-6.2.0.878-4.gitd791ce 2/3
Running scriptlet: iscsi-initiator-utils-iscsiuio-6.2.0.878-4.gitd791ce 2/3
Installing : iscsi-initiator-utils-6.2.0.878-4.gitd791ce0.0.1.el8 3/3
Running scriptlet: iscsi-initiator-utils-6.2.0.878-4.gitd791ce0.0.1.el8 3/3
Verifying : isns-utils-libs-0.99-1.el8.x86_64 1/3
Verifying : iscsi-initiator-utils-iscsiuio-6.2.0.878-4.gitd791ce 2/3
Verifying : iscsi-initiator-utils-6.2.0.878-4.gitd791ce0.0.1.el8 3/3
Installed:
iscsi-initiator-utils-6.2.0.878-4.gitd791ce0.0.1.el8.x86_64
isns-utils-libs-0.99-1.el8.x86_64
iscsi-initiator-utils-iscsiuio-6.2.0.878-4.gitd791ce0.0.1.el8.x86_64
Complete!
Then enable and start iSCSI services on both nodes.
[root@primary01 ~]# systemctl enable iscsid
Created symlink /etc/systemd/system/multi-user.target.wants/iscsid.service → /usr/lib/systemd/system/iscsid.service.
[root@primary01 ~]# systemctl enable iscsi
[root@primary01 ~]# systemctl start iscsid
[root@primary01 ~]# systemctl start iscsi
Now we discover all targets from the NAS on both nodes. In this case, we prepared 8 iSCSI extents, 16 GB each, associated with the target on the NAS.
[root@primary01 ~]# iscsiadm -m discovery -t sendtargets -p nas
192.168.10.101:3260,257 iqn.2005-10.org.freenas.ctl:primary-target
[root@primary01 ~]# iscsiadm -m node --op update -n node.startup -v automatic
[root@primary01 ~]# iscsiadm -m node -p nas --login
Logging in to [iface: default, target: iqn.2005-10.org.freenas.ctl:primary-target, portal: 192.168.10.101,3260] (multiple)
Login to [iface: default, target: iqn.2005-10.org.freenas.ctl:primary-target, portal: 192.168.10.101,3260] successful.
...
List all iSCSI LUNs by path.
[root@primary01 ~]# ll /dev/disk/by-path/
total 0
lrwxrwxrwx 1 root root 9 Sep 28 13:44 ip-192.168.10.101:3260-iscsi-iqn.2005-10.org.freenas.ctl:primary-target-lun-0 -> ../../sdb
lrwxrwxrwx 1 root root 9 Sep 28 13:44 ip-192.168.10.101:3260-iscsi-iqn.2005-10.org.freenas.ctl:primary-target-lun-1 -> ../../sdc
lrwxrwxrwx 1 root root 9 Sep 28 13:44 ip-192.168.10.101:3260-iscsi-iqn.2005-10.org.freenas.ctl:primary-target-lun-2 -> ../../sdd
lrwxrwxrwx 1 root root 9 Sep 28 13:44 ip-192.168.10.101:3260-iscsi-iqn.2005-10.org.freenas.ctl:primary-target-lun-3 -> ../../sde
...
Then we have to partition all 4 shared disks, but only on one node; usually, node 1 is our first choice. The procedure is the same for every disk, so we take /dev/sdb as an example here.
[root@primary01 ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0xa2316ff2.
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p):
Using default response p.
Partition number (1-4, default 1):
First sector (2048-209715199, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-209715199, default 209715199):
Created a new partition 1 of type 'Linux' and of size 100 GiB.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
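The remaining disks (/dev/sdc, /dev/sdd and /dev/sde) are partitioned in exactly the same way. If you prefer not to repeat the interactive fdisk dialog, a non-interactive sketch with parted could be:
[root@primary01 ~]# for d in sdc sdd sde; do parted -s /dev/$d mklabel msdos mkpart primary 1MiB 100%; done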
Let's check the disks after partitioning.
[root@primary01 ~]# ll /dev/disk/by-path/
total 0
lrwxrwxrwx. 1 root root 9 Sep 28 13:52 ip-192.168.10.101:3260-iscsi-iqn.2005-10.org.freenas.ctl:primary-target-lun-0 -> ../../sdb
lrwxrwxrwx. 1 root root 10 Sep 28 13:52 ip-192.168.10.101:3260-iscsi-iqn.2005-10.org.freenas.ctl:primary-target-lun-0-part1 -> ../../sdb1
lrwxrwxrwx. 1 root root 9 Sep 28 13:56 ip-192.168.10.101:3260-iscsi-iqn.2005-10.org.freenas.ctl:primary-target-lun-1 -> ../../sdc
lrwxrwxrwx. 1 root root 10 Sep 28 13:56 ip-192.168.10.101:3260-iscsi-iqn.2005-10.org.freenas.ctl:primary-target-lun-1-part1 -> ../../sdc1
lrwxrwxrwx. 1 root root 9 Sep 28 13:56 ip-192.168.10.101:3260-iscsi-iqn.2005-10.org.freenas.ctl:primary-target-lun-2 -> ../../sdd
lrwxrwxrwx. 1 root root 10 Sep 28 13:56 ip-192.168.10.101:3260-iscsi-iqn.2005-10.org.freenas.ctl:primary-target-lun-2-part1 -> ../../sdd1
lrwxrwxrwx. 1 root root 9 Sep 28 13:56 ip-192.168.10.101:3260-iscsi-iqn.2005-10.org.freenas.ctl:primary-target-lun-3 -> ../../sde
lrwxrwxrwx. 1 root root 10 Sep 28 13:56 ip-192.168.10.101:3260-iscsi-iqn.2005-10.org.freenas.ctl:primary-target-lun-3-part1 -> ../../sde1
...
On the other node, i.e. node 2, we have to refresh the partition table to get the newest disk information.
[root@primary02 ~]# partprobe -s
...
Persistent Naming
Since those iSCSI disks need to be recognized as block devices owned by grid on the servers, we should apply persistent naming rules to them. This time we don't use ASMLib; we use udev to do the job.
First of all, check their unique SCSI identifiers on node 1.
[root@primary01 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdb1
36589cfc000000adffe11b1eaed4524b0
[root@primary01 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdc1
36589cfc000000528749509e95d7a4752
[root@primary01 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sdd1
36589cfc0000009a888c47ef263153fd7
[root@primary01 ~]# /usr/lib/udev/scsi_id -g -u -d /dev/sde1
36589cfc0000004539fdf0756d082d400
Set udev rules on both nodes. The owner should be grid and the group should be asmadmin.
[root@primary01 ~]# vi /etc/udev/rules.d/99-oracle-asm-devices.rules
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="36589cfc000000adffe11b1eaed4524b0", SYMLINK+="asm/disk01", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="36589cfc000000528749509e95d7a4752", SYMLINK+="asm/disk02", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="36589cfc0000009a888c47ef263153fd7", SYMLINK+="asm/disk03", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", RESULT=="36589cfc0000004539fdf0756d082d400", SYMLINK+="asm/disk04", OWNER="grid", GROUP="asmadmin", MODE="0660"
Then reload the new rules on both nodes.
[root@primary01 ~]# partprobe -s
...
[root@primary01 ~]# udevadm control --reload-rules
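If the /dev/asm symlinks do not show up after reloading the rules, you can also ask udev to re-evaluate the block devices:
[root@primary01 ~]# udevadm trigger --type=devices --action=change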
Let's check the result.
[root@primary01 ~]# ll /dev/asm
total 0
lrwxrwxrwx 1 root root 7 Sep 28 14:34 disk01 -> ../sdb1
lrwxrwxrwx 1 root root 7 Sep 28 14:34 disk02 -> ../sdc1
lrwxrwxrwx 1 root root 7 Sep 28 14:34 disk03 -> ../sdd1
lrwxrwxrwx 1 root root 7 Sep 28 14:34 disk04 -> ../sde1
[root@primary01 ~]# ll /dev/sd[b-e]1*
brw-rw---- 1 grid asmadmin 8, 17 Sep 28 14:34 /dev/sdb1
brw-rw---- 1 grid asmadmin 8, 33 Sep 28 14:34 /dev/sdc1
brw-rw---- 1 grid asmadmin 8, 49 Sep 28 14:34 /dev/sdd1
brw-rw---- 1 grid asmadmin 8, 65 Sep 28 14:34 /dev/sde1
The ownership of the disks has been changed to grid:asmadmin, so they can be discovered by Grid Infrastructure.
Software
In this section, we upload and unzip the Oracle installation software for both Grid Infrastructure and the database home to the first node.
Starting with Oracle Grid Infrastructure 12c Release 2 (12.2), installation and configuration of Oracle Grid Infrastructure software is simplified with image-based installation. That is to say, we have to upload and extract the image file directly into grid's ORACLE_HOME.
Upload
Suppose that you have uploaded both installation packages to node 1.
- Grid Infrastructure
- Oracle Home
[root@primary01 ~]# ll /home/grid/sources
total 2821472
-rwxr-xr-x 1 grid oinstall 2889184573 Sep 28 14:40 LINUX.X64_193000_grid_home.zip
[root@primary01 ~]# ll /home/oracle/sources
total 2987996
-rwxr-xr-x 1 oracle oinstall 3059705302 Sep 28 14:46 LINUX.X64_193000_db_home.zip
Unzip
[root@primary01 ~]# su - grid
[grid@primary01 ~]$ unzip -q /home/grid/sources/LINUX.X64_193000_grid_home.zip -d $ORACLE_HOME
[grid@primary01 ~]$ exit
[root@primary01 ~]# su - oracle
[oracle@primary01 ~]$ unzip -q /home/oracle/sources/LINUX.X64_193000_db_home.zip -d $ORACLE_HOME
[oracle@primary01 ~]$ exit
Install Cluster Verification Utility
Since we now have the unzipped software, we need to install a special package, cvuqdisk (used by the Cluster Verification Utility), which will be needed during the Grid Infrastructure installation.
On node 1.
[root@primary01 ~]# cp -p /u01/app/19.0.0/grid/cv/rpm/cvuqdisk-1.0.10-1.rpm /tmp
[root@primary01 ~]# scp -p /u01/app/19.0.0/grid/cv/rpm/cvuqdisk-1.0.10-1.rpm primary02:/tmp
The authenticity of host 'primary02 (192.168.10.12)' can't be established.
ECDSA key fingerprint is SHA256:...
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'primary02,192.168.10.12' (ECDSA) to the list of known hosts.
root@primary02's password:
cvuqdisk-1.0.10-1.rpm
On both nodes.
[root@primary01 ~]# rpm -Uvh /tmp/cvuqdisk-1.0.10-1.rpm
Verifying... ################################# [100%]
Preparing... ################################# [100%]
Using default group oinstall to install package
Updating / installing...
1:cvuqdisk-1.0.10-1 ################################# [100%]
Check the result.
[root@primary01 ~]# rpm -q cvuqdisk
cvuqdisk-1.0.10-1.x86_64
Cluster Verification
Before we install Grid Infrastructure, we should verify that everything is ready. Here we run the Oracle Cluster Verification Utility (CLUVFY) as the grid user to check whether there are any problems.
[grid@primary01 ~]$ cd $ORACLE_HOME
[grid@primary01 grid]$ ./runcluvfy.sh stage -pre crsinst -n primary01,primary02 -verbose
ERROR:
PRVG-10467 : The default Oracle Inventory group could not be determined.
Verifying User Equivalence ...FAILED (PRVG-2019, PRKC-1191)
PRVF-4009 : User equivalence is not set for nodes: primary02
Verification will proceed with nodes: primary01
Verifying Physical Memory ...
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
primary01 3.6695GB (3847752.0KB) 8GB (8388608.0KB) failed
Verifying Physical Memory ...FAILED (PRVF-7530)
Verifying Available Physical Memory ...
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
primary01 3.219GB (3375316.0KB) 50MB (51200.0KB) passed
Verifying Available Physical Memory ...PASSED
Verifying Swap Size ...
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
primary01 3.9531GB (4145148.0KB) 3.6695GB (3847752.0KB) passed
Verifying Swap Size ...PASSED
Verifying Free Space: primary01:/usr,primary01:/var,primary01:/etc,primary01:/sbin,primary01:/tmp ...
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/usr primary01 / 78.4101GB 25MB passed
/var primary01 / 78.4101GB 5MB passed
/etc primary01 / 78.4101GB 25MB passed
/sbin primary01 / 78.4101GB 10MB passed
/tmp primary01 / 78.4101GB 1GB passed
Verifying Free Space: primary01:/usr,primary01:/var,primary01:/etc,primary01:/sbin,primary01:/tmp ...PASSED
Verifying User Existence: grid ...
Node Name Status Comment
------------ ------------------------ ------------------------
primary01 passed exists(54322)
Verifying Users With Same UID: 54322 ...PASSED
Verifying User Existence: grid ...PASSED
Verifying Group Existence: asmadmin ...
Node Name Status Comment
------------ ------------------------ ------------------------
primary01 passed exists
Verifying Group Existence: asmadmin ...PASSED
Verifying Group Existence: asmdba ...
Node Name Status Comment
------------ ------------------------ ------------------------
primary01 passed exists
Verifying Group Existence: asmdba ...PASSED
Verifying Group Membership: asmdba ...
Node Name User Exists Group Exists User in Group Status
---------------- ------------ ------------ ------------ ----------------
primary01 yes yes yes passed
Verifying Group Membership: asmdba ...PASSED
Verifying Group Membership: asmadmin ...
Node Name User Exists Group Exists User in Group Status
---------------- ------------ ------------ ------------ ----------------
primary01 yes yes yes passed
Verifying Group Membership: asmadmin ...PASSED
Verifying Run Level ...
Node Name run level Required Status
------------ ------------------------ ------------------------ ----------
primary01 3 3,5 passed
Verifying Run Level ...PASSED
Verifying Users With Same UID: 0 ...PASSED
Verifying Current Group ID ...PASSED
Verifying Root user consistency ...
Node Name Status
------------------------------------ ------------------------
primary01 passed
Verifying Root user consistency ...PASSED
Verifying Package: cvuqdisk-1.0.10-1 ...
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
primary01 cvuqdisk-1.0.10-1 cvuqdisk-1.0.10-1 passed
Verifying Package: cvuqdisk-1.0.10-1 ...PASSED
Verifying Host name ...PASSED
Verifying Node Connectivity ...
Verifying Hosts File ...
Node Name Status
------------------------------------ ------------------------
primary01 passed
Verifying Hosts File ...PASSED
Interface information for node "primary01"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
ens33 192.168.0.11 192.168.0.0 0.0.0.0 192.168.0.12 9 00:0C:29:DC:65:90 1500
ens34 192.168.24.11 192.168.24.0 0.0.0.0 192.168.0.12 9 00:0C:29:DC:65:9A 1500
ens33 2402:7500:52d:1a76:20c:29ff:fedc:6590 2402:7500:52d:1a76:0:0:0:0 UNKNOWN :: 00:0C:29:DC:65:90 1500
Check: MTU consistency of the subnet "192.168.0.0".
Node Name IP Address Subnet MTU
---------------- ------------ ------------ ------------ ----------------
primary01 ens33 192.168.0.11 192.168.0.0 1500
Check: MTU consistency of the subnet "2402:7500:52d:1a76:0:0:0:0".
Node Name IP Address Subnet MTU
---------------- ------------ ------------ ------------ ----------------
primary01 ens33 2402:7500:52d:1a76:20c:29ff:fedc:6590 2402:7500:52d:1a76:0:0:0:0 1500
Check: MTU consistency of the subnet "192.168.24.0".
Node Name IP Address Subnet MTU
---------------- ------------ ------------ ------------ ----------------
primary01 ens34 192.168.24.11 192.168.24.0 1500
Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
Verifying Node Connectivity ...PASSED
Verifying Multicast or broadcast check ...
Checking subnet "192.168.0.0" for multicast communication with multicast group "224.0.0.251"
Verifying Multicast or broadcast check ...PASSED
Verifying Network Time Protocol (NTP) ...PASSED
Verifying Same core file name pattern ...PASSED
Verifying User Mask ...
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
primary01 0022 0022 passed
Verifying User Mask ...PASSED
Verifying User Not In Group "root": grid ...
Node Name Status Comment
------------ ------------------------ ------------------------
primary01 passed does not exist
Verifying User Not In Group "root": grid ...PASSED
Verifying Time zone consistency ...PASSED
Verifying resolv.conf Integrity ...
Node Name Status
------------------------------------ ------------------------
primary01 passed
checking response for name "primary01" from each of the name servers specified
in "/etc/resolv.conf"
Node Name Source Comment Status
------------ ------------------------ ------------------------ ----------
primary01 192.168.0.199 IPv4 passed
Verifying resolv.conf Integrity ...PASSED
Verifying DNS/NIS name service ...PASSED
Verifying Domain Sockets ...PASSED
Verifying /boot mount ...PASSED
Verifying Daemon "avahi-daemon" not configured and running ...
Node Name Configured Status
------------ ------------------------ ------------------------
primary01 no passed
Node Name Running? Status
------------ ------------------------ ------------------------
primary01 no passed
Verifying Daemon "avahi-daemon" not configured and running ...PASSED
Verifying Daemon "proxyt" not configured and running ...
Node Name Configured Status
------------ ------------------------ ------------------------
primary01 no passed
Node Name Running? Status
------------ ------------------------ ------------------------
primary01 no passed
Verifying Daemon "proxyt" not configured and running ...PASSED
Verifying User Equivalence ...PASSED
Verifying RPM Package Manager database ...INFORMATION (PRVG-11250)
Verifying /dev/shm mounted as temporary file system ...PASSED
Verifying File system mount options for path /var ...PASSED
Verifying DefaultTasksMax parameter ...PASSED
Verifying zeroconf check ...PASSED
Verifying ASM Filter Driver configuration ...PASSED
Pre-check for cluster services setup was unsuccessful on all the nodes.
Failures were encountered during execution of CVU verification request "stage -pre crsinst".
Verifying User Equivalence ...FAILED
primary02: PRVG-2019 : Check for equivalence of user "grid" from node
"primary01" to node "primary02" failed
PRKC-1191 : Remote command execution setup check for node primary02
using shell /usr/bin/ssh failed.
No ECDSA host key is known for primary02 and you have requested
strict checking.Host key verification failed.
Verifying Physical Memory ...FAILED
primary01: PRVF-7530 : Sufficient physical memory is not available on node
"primary01" [Required physical memory = 8GB (8388608.0KB)]
Verifying RPM Package Manager database ...INFORMATION
PRVG-11250 : The check "RPM Package Manager database" was not performed because
it needs 'root' user privileges.
CVU operation performed: stage -pre crsinst
Date: Oct 12, 2020 1:25:41 PM
CVU home: /u01/app/19.0.0/grid/
User: grid
As we can see, there are some problems:
- Oracle Inventory group
- User equivalence
- Physical memory
- Root Privilege
PRVG-10467 : The default Oracle Inventory group could not be determined.
For more details about PRVG-10467, you may check How to Resolve PRVG-10467 : The default Oracle Inventory group could not be determined.
PRVF-4009 : User equivalence is not set for nodes: primary02
or this:
PRVG-2019 : Check for equivalence of user "grid" from node
Yes, we have not set up user equivalence for grid yet; we'd like to establish SSH equivalence during the grid installation, so this error is expected. To solve it, you may check the post: How to Resolve PRVG-2019 : Check for equivalence of user from node.
PRVF-7530 : Sufficient physical memory is not available on node
That's true; we don't meet the physical memory requirement, but the warning can be ignored.
PRVG-11250 : The check "RPM Package Manager database" was not performed because it needs 'root' user privileges.
For more details about PRVG-11250, you may check How to Resolve PRVG-11250 : The check "RPM Package Manager database" was not performed because it needs 'root' user privileges.
No major problem here, so we can just keep going.
By the way, if runcluvfy.sh seems to hang after you run it, you may revert the scp fix for the OpenSSH 8.x problem, then try again.
A perfect result may look like this:
[grid@primary01 ~]$ cd $ORACLE_HOME
[grid@primary01 grid]$ ./runcluvfy.sh stage -pre crsinst -n primary01,primary02 -verbose -method root
Enter "ROOT" password:
Verifying Physical Memory ...
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
primary02 9.45GB (9909016.0KB) 8GB (8388608.0KB) passed
primary01 9.45GB (9909016.0KB) 8GB (8388608.0KB) passed
Verifying Physical Memory ...PASSED
Verifying Available Physical Memory ...
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
primary02 9.2411GB (9690000.0KB) 50MB (51200.0KB) passed
primary01 8.9472GB (9381808.0KB) 50MB (51200.0KB) passed
Verifying Available Physical Memory ...PASSED
Verifying Swap Size ...
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
primary02 9.9922GB (1.0477564E7KB) 9.45GB (9909016.0KB) passed
primary01 9.9922GB (1.0477564E7KB) 9.45GB (9909016.0KB) passed
Verifying Swap Size ...PASSED
Verifying Free Space: primary02:/usr,primary02:/var,primary02:/etc,primary02:/sbin,primary02:/tmp ...
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/usr primary02 / 154.2051GB 25MB passed
/var primary02 / 154.2051GB 5MB passed
/etc primary02 / 154.2051GB 25MB passed
/sbin primary02 / 154.2051GB 10MB passed
/tmp primary02 / 154.2051GB 1GB passed
Verifying Free Space: primary02:/usr,primary02:/var,primary02:/etc,primary02:/sbin,primary02:/tmp ...PASSED
Verifying Free Space: primary01:/usr,primary01:/var,primary01:/etc,primary01:/sbin,primary01:/tmp ...
Path Node Name Mount point Available Required Status
---------------- ------------ ------------ ------------ ------------ ------------
/usr primary01 / 141.1523GB 25MB passed
/var primary01 / 141.1523GB 5MB passed
/etc primary01 / 141.1523GB 25MB passed
/sbin primary01 / 141.1523GB 10MB passed
/tmp primary01 / 141.1523GB 1GB passed
Verifying Free Space: primary01:/usr,primary01:/var,primary01:/etc,primary01:/sbin,primary01:/tmp ...PASSED
Verifying User Existence: grid ...
Node Name Status Comment
------------ ------------------------ ------------------------
primary02 passed exists(54322)
primary01 passed exists(54322)
Verifying Users With Same UID: 54322 ...PASSED
Verifying User Existence: grid ...PASSED
Verifying Group Existence: asmadmin ...
Node Name Status Comment
------------ ------------------------ ------------------------
primary02 passed exists
primary01 passed exists
Verifying Group Existence: asmadmin ...PASSED
Verifying Group Existence: asmdba ...
Node Name Status Comment
------------ ------------------------ ------------------------
primary02 passed exists
primary01 passed exists
Verifying Group Existence: asmdba ...PASSED
Verifying Group Existence: oinstall ...
Node Name Status Comment
------------ ------------------------ ------------------------
primary02 passed exists
primary01 passed exists
Verifying Group Existence: oinstall ...PASSED
Verifying Group Membership: asmdba ...
Node Name User Exists Group Exists User in Group Status
---------------- ------------ ------------ ------------ ----------------
primary02 yes yes yes passed
primary01 yes yes yes passed
Verifying Group Membership: asmdba ...PASSED
Verifying Group Membership: asmadmin ...
Node Name User Exists Group Exists User in Group Status
---------------- ------------ ------------ ------------ ----------------
primary02 yes yes yes passed
primary01 yes yes yes passed
Verifying Group Membership: asmadmin ...PASSED
Verifying Group Membership: oinstall(Primary) ...
Node Name User Exists Group Exists User in Group Primary Status
---------------- ------------ ------------ ------------ ------------ ------------
primary02 yes yes yes yes passed
primary01 yes yes yes yes passed
Verifying Group Membership: oinstall(Primary) ...PASSED
Verifying Run Level ...
Node Name run level Required Status
------------ ------------------------ ------------------------ ----------
primary02 3 3,5 passed
primary01 3 3,5 passed
Verifying Run Level ...PASSED
Verifying Hard Limit: maximum open file descriptors ...
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
primary02 hard 65536 65536 passed
primary01 hard 65536 65536 passed
Verifying Hard Limit: maximum open file descriptors ...PASSED
Verifying Soft Limit: maximum open file descriptors ...
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
primary02 soft 1024 1024 passed
primary01 soft 1024 1024 passed
Verifying Soft Limit: maximum open file descriptors ...PASSED
Verifying Hard Limit: maximum user processes ...
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
primary02 hard 16384 16384 passed
primary01 hard 16384 16384 passed
Verifying Hard Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum user processes ...
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
primary02 soft 2047 2047 passed
primary01 soft 2047 2047 passed
Verifying Soft Limit: maximum user processes ...PASSED
Verifying Soft Limit: maximum stack size ...
Node Name Type Available Required Status
---------------- ------------ ------------ ------------ ----------------
primary02 soft 10240 10240 passed
primary01 soft 10240 10240 passed
Verifying Soft Limit: maximum stack size ...PASSED
Verifying Users With Same UID: 0 ...PASSED
Verifying Current Group ID ...PASSED
Verifying Root user consistency ...
Node Name Status
------------------------------------ ------------------------
primary02 passed
primary01 passed
Verifying Root user consistency ...PASSED
Verifying Package: cvuqdisk-1.0.10-1 ...
Node Name Available Required Status
------------ ------------------------ ------------------------ ----------
primary02 cvuqdisk-1.0.10-1 cvuqdisk-1.0.10-1 passed
primary01 cvuqdisk-1.0.10-1 cvuqdisk-1.0.10-1 passed
Verifying Package: cvuqdisk-1.0.10-1 ...PASSED
Verifying Host name ...PASSED
Verifying Node Connectivity ...
Verifying Hosts File ...
Node Name Status
------------------------------------ ------------------------
primary01 passed
primary02 passed
Verifying Hosts File ...PASSED
Interface information for node "primary01"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
ens33 192.168.1.11 192.168.1.0 0.0.0.0 192.168.1.1 00:0C:29:29:C9:EC 1500
ens34 192.168.24.11 192.168.24.0 0.0.0.0 192.168.1.1 00:0C:29:29:C9:F6 1500
Interface information for node "primary02"
Name IP Address Subnet Gateway Def. Gateway HW Address MTU
------ --------------- --------------- --------------- --------------- ----------------- ------
ens33 192.168.1.12 192.168.1.0 0.0.0.0 192.168.1.1 00:0C:29:64:5D:93 1500
ens34 192.168.24.12 192.168.24.0 0.0.0.0 192.168.1.1 00:0C:29:64:5D:9D 1500
Check: MTU consistency of the subnet "192.168.1.0".
Node Name IP Address Subnet MTU
---------------- ------------ ------------ ------------ ----------------
primary01 ens33 192.168.1.11 192.168.1.0 1500
primary02 ens33 192.168.1.12 192.168.1.0 1500
Check: MTU consistency of the subnet "192.168.24.0".
Node Name IP Address Subnet MTU
---------------- ------------ ------------ ------------ ----------------
primary01 ens34 192.168.24.11 192.168.24.0 1500
primary02 ens34 192.168.24.12 192.168.24.0 1500
Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
Source Destination Connected?
------------------------------ ------------------------------ ----------------
primary01[ens33:192.168.1.11] primary02[ens33:192.168.1.12] yes
Source Destination Connected?
------------------------------ ------------------------------ ----------------
primary01[ens34:192.168.24.11] primary02[ens34:192.168.24.12] yes
Verifying subnet mask consistency for subnet "192.168.1.0" ...PASSED
Verifying subnet mask consistency for subnet "192.168.24.0" ...PASSED
Verifying Node Connectivity ...PASSED
Verifying Multicast or broadcast check ...
Checking subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251"
Verifying Multicast or broadcast check ...PASSED
Verifying Network Time Protocol (NTP) ...PASSED
Verifying Same core file name pattern ...PASSED
Verifying User Mask ...
Node Name Available Required Comment
------------ ------------------------ ------------------------ ----------
primary02 0022 0022 passed
primary01 0022 0022 passed
Verifying User Mask ...PASSED
Verifying User Not In Group "root": grid ...
Node Name Status Comment
------------ ------------------------ ------------------------
primary02 passed does not exist
primary01 passed does not exist
Verifying User Not In Group "root": grid ...PASSED
Verifying Time zone consistency ...PASSED
Verifying Time offset between nodes ...PASSED
Verifying resolv.conf Integrity ...
Node Name Status
------------------------------------ ------------------------
primary01 passed
primary02 passed
checking response for name "primary02" from each of the name servers specified
in "/etc/resolv.conf"
Node Name Source Comment Status
------------ ------------------------ ------------------------ ----------
primary02 192.168.1.199 IPv4 passed
checking response for name "primary01" from each of the name servers specified
in "/etc/resolv.conf"
Node Name Source Comment Status
------------ ------------------------ ------------------------ ----------
primary01 192.168.1.199 IPv4 passed
Verifying resolv.conf Integrity ...PASSED
Verifying DNS/NIS name service ...PASSED
Verifying Domain Sockets ...PASSED
Verifying /boot mount ...PASSED
Verifying Daemon "avahi-daemon" not configured and running ...
Node Name Configured Status
------------ ------------------------ ------------------------
primary02 no passed
primary01 no passed
Node Name Running? Status
------------ ------------------------ ------------------------
primary02 no passed
primary01 no passed
Verifying Daemon "avahi-daemon" not configured and running ...PASSED
Verifying Daemon "proxyt" not configured and running ...
Node Name Configured Status
------------ ------------------------ ------------------------
primary02 no passed
primary01 no passed
Node Name Running? Status
------------ ------------------------ ------------------------
primary02 no passed
primary01 no passed
Verifying Daemon "proxyt" not configured and running ...PASSED
Verifying User Equivalence ...PASSED
Verifying RPM Package Manager database ...PASSED
Verifying /dev/shm mounted as temporary file system ...PASSED
Verifying File system mount options for path /var ...PASSED
Verifying DefaultTasksMax parameter ...PASSED
Verifying zeroconf check ...PASSED
Verifying ASM Filter Driver configuration ...PASSED
Pre-check for cluster services setup was successful.
CVU operation performed: stage -pre crsinst
Date: Jul 17, 2020 2:39:32 PM
CVU home: /u01/app/19.0.0/grid/
User: grid
More Fixes
Different platforms may hit different problems; if you meet any of them, here are some fixes that may be helpful.
Version Fix
When you start OUI, you may see INS-08101 Unexpected error while executing the action at state: 'supportedOSCheck' if you are using Enterprise Linux 8 or above. You should set the CV_ASSUME_DISTID environment variable to a lower value, say OEL7.8, to solve INS-08101 and make OUI treat it as a compatible OS release.
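For example, you could export the variable in the session before launching gridSetup.sh or runInstaller (we already placed it in the .bash_profile of grid and oracle above; OEL7.8 is just one commonly used value):
[grid@primary01 ~]$ export CV_ASSUME_DISTID=OEL7.8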
Shared Memory
For some servers, you may need to tune shared memory manually.
[root@primary01 ~]# vi /etc/fstab
...
tmpfs /dev/shm tmpfs size=6g 0 0
Or use MB as the unit.
tmpfs /dev/shm tmpfs size=6144m 0 0
To make it take effect immediately, you can do this:
[root@primary01 ~]# mount -o remount /dev/shm
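You can confirm the new size with df; the tmpfs line for /dev/shm should reflect what you configured:
[root@primary01 ~]# df -h /dev/shm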
scp Fix
Since we chose Enterprise Linux 8.1 for our database servers, we should fix scp to avoid the INS-06006 problem during installation.
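A commonly used workaround, which can be reverted after installation, is to wrap scp so it always passes the -T option; this is only a sketch of that approach:
[root@primary01 ~]# mv /usr/bin/scp /usr/bin/scp.orig
[root@primary01 ~]# echo '/usr/bin/scp.orig -T $*' > /usr/bin/scp
[root@primary01 ~]# chmod 755 /usr/bin/scp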
DNS Fix
The DNS servers listed in /etc/resolv.conf should all be internal DNS servers able to resolve the hostnames of the database servers; otherwise, a DNS warning may occur during the installation's system checks.
To fix the problem, modify the configuration of the network interface. For example, I keep only the first DNS entry and comment out the rest in the interface configuration.
[root@primary01 ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
...
DNS1=192.168.10.199
#DNS2=...
#DNS3=...
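On Enterprise Linux 8 the interfaces are managed by NetworkManager, so the same change can also be made with nmcli instead of editing the ifcfg file directly (the connection name ens33 is assumed here; yours may differ):
[root@primary01 ~]# nmcli connection modify ens33 ipv4.dns "192.168.10.199"
[root@primary01 ~]# nmcli connection up ens33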
Everything seems ready; we can now move on to installing Oracle 19c Grid Infrastructure.